#2015-07-2517:13val_waeselynckSharing a moment of joy here: migrating my web server from MongoDB to Datomic, I just rewrote my multi-level authorization system in pure Datalog. Rules freaking rule.#2015-07-2611:37robert-stuttaford\o/ @val_waeselynck#2015-07-2611:39robert-stuttaford@shofetim: if you’re ok with System Time being synonymous with Domain time - that is, datomic’s transaction time is good enough for your app’s notion of the time when things occurred, then you can simply use d/as-of and d/since to constrain queries about time#2015-07-2611:40robert-stuttafordif you have timestamps that differ from Datomic’s transaction time, then you can annotate transactions as you write them#2015-07-2611:41robert-stuttaford@(d/transact conn
[[:db/add (d/tempid :db.part/tx) :your.domain/timestamp ts]
... ])
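The annotation pattern above, combined with the d/filter idea mentioned next in the thread, might look like the following sketch. Here `conn`, `domain-ts`, and the `:your.domain/timestamp` attribute are assumptions, and (as robert-stuttaford notes) the performance of such a filter is untested:

```clojure
(require '[datomic.api :as d])

;; Annotate the transaction itself with a domain timestamp
;; (:your.domain/timestamp assumed to be a :db.type/instant attribute):
@(d/transact conn
  [[:db/add (d/tempid :db.part/tx) :your.domain/timestamp domain-ts]
   ;; ... the rest of the transaction data ...
   ])

;; A hypothetical "as-of domain time" built with d/filter: keep only
;; datoms whose transaction carries a domain timestamp at or before
;; the cutoff. Note this sketch drops datoms from unannotated txes.
(defn domain-as-of [db ^java.util.Date cutoff]
  (d/filter db
            (fn [db datom]
              (when-let [ts (:your.domain/timestamp (d/entity db (:tx datom)))]
                (not (.after ts cutoff))))))
```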
#2015-07-2611:41robert-stuttafordanything you write to a tempid for :db.part/tx will be asserted on the reified transaction entity directly#2015-07-2611:42robert-stuttafordthen, you can use d/filter to write your own as-of and since filters. i’ve not yet done this personally, so i don’t know how to do that performantly, yet.#2015-07-2612:20val_waeselynckCan I pass a set of 3-tuples as a data source to a Datalog query and treat it exactly as I would a Datomic database value? I think I read it's possible, but can't really get it to work#2015-07-2612:23val_waeselynckNvm I just had to remove the ? from datasource names#2015-07-2618:27kachayev@val_waeselynck: here is a nice set of examples from Stuart for doing this: https://gist.github.com/stuarthalloway/2645453#2015-07-2621:41val_waeselynck@kachayev thanks!#2015-07-2707:19kachayevI wonder if it’s possible to know in advance how many segments will be fetched from storages during query execution?#2015-07-2707:19kachayevI mean I understand that it depends on query itself, schema, data that was already fetched etc#2015-07-2707:19kachayevSomething like “select explain”#2015-07-2708:24robert-stuttafordi saw someone had made a library to attempt to count datoms at each clause, but that was before all the new stuff was added#2015-07-2708:25robert-stuttafordfrom reading the questions the author was asking and the sort of answer he was getting from Cognitect, it was directly in Datomic’s secret-sauce and no real progress was made#2015-07-2708:25robert-stuttafordif such a facility exists, it’ll be because Cognitect provides it#2015-07-2708:38kachayevsure#2015-07-2709:33robert-stuttafordanyone using the pull spec have a simple way to flatten the result such that all nested maps are merged with the root map?#2015-07-2713:28akiel@kachayev: i think the best thing you can do is to monitor your storage. as far as i know, index segments have a fixed size in kilobytes. 
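val_waeselynck's collection-as-source question above can be sketched like this (hypothetical data; the key point is that extra source names start with `$`, not `?`):

```clojure
(require '[datomic.api :as d])

;; A plain collection of tuples works as an extra query source:
(d/q '[:find ?person ?city
       :in $facts
       :where [$facts ?person :lives-in ?city]]
     #{[:alice :lives-in :paris]
       [:bob   :lives-in :lyon]})
;; => #{[:alice :paris] [:bob :lyon]}
```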
so even your data influences how many segments are needed for a query. the peer also caches all segments. so running a reasonably small query again should not reach out to the storage twice. in queries, it’s important to order the clauses by the number of datoms they bind. by having the most specific clause first, you will get the best performance.#2015-07-2713:29kachayevright#2015-07-2713:29kachayevthe question is that it’s hard to keep track of “order the clauses by the number of datoms” when you have many queries#2015-07-2713:29kachayevand/or dynamically built queries#2015-07-2713:31kachayevit’s also hard to “play” with data locality - it just takes too much time to do: change schema, load everything, run a lot of queries, then analyze charts about network consumption & storage performance (and they are not that obvious usually)#2015-07-2713:32robert-stuttaforddynamically built queries are not such a good idea. d/q caches the preparatory work it does for its first param. better to have standard queries with dynamic :in values#2015-07-2713:33robert-stuttafordhear you on the ease of play thing. immutability does have its downsides simple_smile#2015-07-2713:34robert-stuttafordi’ve just checked, we have 500+ invocations of d/q in our projects#2015-07-2713:34robert-stuttafordand we’ve done ok with perf testing each one as we go#2015-07-2713:35robert-stuttafordordering clauses such that :in values are handled early and :find values late, and then testing swapping things around in the middle#2015-07-2713:36robert-stuttafordon network traffic, you can stick memcached in the middle to get a big overall read perf boost#2015-07-2713:39kachayevdidn’t get the idea about “immutability” and “data locality” (in terms of “immutability downsides”). orthogonal concepts as for me#2015-07-2714:03akiel@kachayev: right a query planner may be helpful in bigger projects - I once heard from Rich that he likes the control one has when there is no query planner. 
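akiel's clause-ordering advice can be illustrated with a sketch (`db` and the `:user/*` attributes are hypothetical):

```clojure
(require '[datomic.api :as d])

;; Most selective clause first: the email lookup binds few datoms,
;; so the second clause only widens from the bound ?e.
(d/q '[:find ?name
       :in $ ?email
       :where
       [?e :user/email ?email]   ; narrow: few matching datoms
       [?e :user/name ?name]]    ; widen from the bound ?e
     db "ann@example.com")

;; Reversing the two :where clauses would scan every :user/name
;; datom before the email constraint applies.
```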
I think he was bitten by some SQL query planner in the past. If he still thinks the same and paying customers do not complain a lot, do not expect a query planner very soon.#2015-07-2714:05akiel@kachayev: I expect data locality is also not easy to track down inside say Oracle accessing files in a SAN. Other than that there is better tooling around.#2015-07-2714:08kachayevI can’t say that it’s a kind of “complain”, just curiosity. I understand that most modern databases don’t provide any tooling for this as well, so it’s not a “must-have” and definitely not a “deal-breaker”.#2015-07-2714:09robert-stuttafordi was talking more to the busy-work of having to recreate databases with new schema etc to test different setups#2015-07-2715:57jelleaIs the Datomic documentation available offline (as pdf, dash docset, repo)?#2015-07-2716:00tcrayford@akiel: note: not all segments are cached, log segments aren't, neither are things from the gc#2015-07-2717:22akiel@tcrayford: what do you mean with gc?#2015-07-2717:24tcrayfordthe "list of segments to gc" is stored in storage somewhere. When you run d/gc-storage it has to query that stuff and it's not cached#2015-07-2717:52akielah ok - so this is only a maintenance thing#2015-07-2718:21robert-stuttaford@jellea: nope. http://docs.datomic.com is it#2015-07-2814:04ljosaDoes anyone here use Datomic with multiple Couchbase clusters and cross-datacenter replication (XDCR)?#2015-07-2818:53stuartsierra@ljosa: Datomic is, in general, not designed to support cross-datacenter operation with one Transactor pair.#2015-07-2818:57stuartsierraCross-datacenter replication strategies usually allow data to diverge between the two datacenters, with some kind of arbitrary rule for conflict resolution. This is not a strong enough guarantee to preserve Datomic's consistency model.#2015-07-2818:59stuartsierraFor example, for Couchbase, http://docs.couchbase.com/admin/admin/XDCR/xdcr-architecture.html
'XDCR … provides eventual consistency across clusters. If a conflict occurs, the document with the most updates will be considered the “winner.” '#2015-07-2819:00ljosaHmm … but doesn’t Datomic store immutable segments, always with new keys?#2015-07-2819:00tcrayfordnot for the roots#2015-07-2819:00tcrayfordthe roots require CAS or consistent put#2015-07-2819:01tcrayford(they just contain uuids to the immutable segments afaik)#2015-07-2819:01stuartsierraYes, as @tcrayford says, there is one important piece that is not immutable: the pointer to the "root" of each database value.#2015-07-2819:01tcrayfordack, wrong term s/root/"pointer to the root"/g#2015-07-2819:02tcrayfordthere are afaik like 4-6 or something of those as well, not just a single thing#2015-07-2819:02stuartsierraAlso, the immutable segments are nodes in a tree structure… if the tree has a new root but not all the leaves have been replicated across the datacenters, you would see inconsistent results. Datomic doesn't allow this, so it would appear as unavailability.#2015-07-2819:02tcrayford(uh: db heartbeat, log tail, log, indexes, gc)#2015-07-2819:03stuartsierraBasically, you can't get Datomic's strong consistency guarantees and cross-datacenter (or cross-region) replication at the same time. simple_smile#2015-07-2819:03tcrayfordphysics, a thing#2015-07-2819:20ljosaI believe conflicts cannot happen in this case because the replication is one-way from the cluster that Datomic writes to. But I see that point that Datomic will be confused if the mutable documents are updated in the wrong order or if the leaves of the tree are delayed. Do you know how Datomic would react in such cases? Would it throw an exception?
(That might be OK: from playing with Datomic and XDCR, it seems that replication delays are usually masked because recent datoms are cached in the memory index, which is transferred directly from the transactor to the peers.)#2015-07-2819:21ghadiare entities comparable?#2015-07-2819:21ghadilike if I access a ref (through navigation) on two different database values.#2015-07-2819:22ghadiI should just test it out... but chat room#2015-07-2819:24stuartsierra@ljosa: In general Datomic will always prefer an error to returning inconsistent results. But you should be aware that cross-datacenter replication is not a supported use case so anything it does is, by definition, undefined behavior.#2015-07-2819:25ljosaunderstood. thank you for good answers.#2015-07-2819:26ljosaI suppose we’ll have to get by with a single Couchbase cluster in a single AZ and hope that caching in the peers together with the memory index is enough to smooth over AWS glitches.#2015-07-2819:27ljosaI suppose Datomic must be using strongly consistent reads when it’s running on DynamoDB?#2015-07-2819:37tcrayfordfor the pointers to roots, yeah, CAS too iirc#2015-07-2819:37tcrayford(I think dynamo supports it)#2015-07-2820:42arohneryes, dynamo supports strongly consistent reads#2015-07-2820:44arohnerand CS#2015-07-2820:44arohners/CS/CAS/#2015-07-2820:45arohnersome systems (not sure if dynamo is one), only have problems w/ consistent reads when (ab)using mutability#2015-07-2820:45arohnerif you never update-in-place, it will give you correct results for “give me the segment w/ this UUID”, even when using eventual consistency#2015-07-2908:55robert-stuttafordshould i be seeing a different d/basis-t value for an d/as-of database in the past?#2015-07-2908:56robert-stuttafordno matter what i do, i’m always getting the same basis-t and next-t values back#2015-07-2909:06robert-stuttafordanyone have any ideas?#2015-07-2909:51tcrayford@robert-stuttaford: I think it's because basis-t is an implementation detail 
that's leaking through as-of? Recall the part of @stuarthalloway's recent ete datalog talk about the history filters. as I understand things, as-of etc are implemented as things that a) filter the index b) merge the live index and parts of the history index. I don't think any of those actually needs to affect basis-t#2015-07-2909:57robert-stuttafordthanks, tom#2015-07-2909:59tcrayford@robert-stuttaford: uh, bad terminology there. When I say "live index" in that para, I mean as opposed to the historical indexes, not the peer in memory stuff#2015-07-2909:59robert-stuttafordyeah#2015-07-2909:59robert-stuttafordi kinda got that#2015-07-2909:59tcrayfordsimple_smile#2015-07-2914:28caskolkm@(d/transact conn [[:db.fn/retractEntity (BigInteger. company-id)]])
causes a:#2015-07-2914:28caskolkmdb.error/reset-tx-instant You can set :db/txInstant only on the current transaction#2015-07-2914:28caskolkmDoes anyone know what I'm doing wrong?#2015-07-2914:31bostonaholic@caskolkm: I was getting the same error earlier this week. Unfortunately, I don't believe I solved it. (Just deleted and recreated my local db)#2015-07-2914:38bostonaholicI know that probably doesn't help much 😜#2015-07-2914:41bostonaholicplus I usually do (Long. eid)#2015-07-2914:55bkamphaus@robert-stuttaford: kind of late, but if you want the as-of point you need to use as-of-t — http://docs.datomic.com/clojure/#datomic.api/as-of-t, likewise since requires since-t http://docs.datomic.com/clojure/#datomic.api/since-t#2015-07-2915:25robert-stuttaford@bkamphaus: thanks. i do recommend you update the docstrings for basis-t and next-t, because they are not correct#2015-07-2915:26robert-stuttaford"Returns the t of the most recent transaction reachable via this db value.”#2015-07-2915:27robert-stuttafordperhaps just a note that its result is not constrained by d/as-of or d/since#2015-07-2915:47caskolkm@bostonaholic: weird.. Hopefully someone else knows the answer :)#2015-07-2915:48bostonaholicI just tried it again and it worked...#2015-07-2915:50bkamphaus@robert-stuttaford: I think you’re correct that there may be doc improvements that would make sense around using filters, but I don’t think that is necessarily one of them. Sorry, I’m still working on how to best phrase it, but the db value returned by a call to a filter (`as-of`, since, or your own) - is a db value with the filter applied. 
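bkamphaus's point about filtered db values can be sketched at the REPL (`conn` and the t value 1000 are hypothetical):

```clojure
(require '[datomic.api :as d])

(def db      (d/db conn))
(def past-db (d/as-of db 1000))

(d/basis-t past-db) ; unchanged by the filter: same as (d/basis-t db)
(d/as-of-t past-db) ; the filter point: 1000
(d/as-of-t db)      ; nil — no as-of filter on the unfiltered db
```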
the as-of-t and basis-t are different, but basis-t is still the correct basis of the filtered db, even though the filter may filter out the most recent (or several of the most recent) tx(es).#2015-07-2915:52bkamphausanother angle of this, that the db-after returned by with for an as-of db filter filters out the prospective data, is also surprising.#2015-07-2915:55bkamphaus@bostonaholic: and cc @caskolkm if you’re able to repro I’d be curious to see, but you’re correct that entity ids should be java.lang.Long and I’d want to see it repro’d using the correct type of arg to retractEntity#2015-07-2916:02robert-stuttaford@bkamphaus: thank you for your feedback#2015-07-2916:03robert-stuttafordnot sure what the right way forward is, i just know that it’s not obvious that this is the case, and it can catch people. certainly caught me, and i’ve been using Datomic for a long time#2015-07-2916:06caskolkm@bostonaholic: can you show me your code? #2015-07-2916:08bostonaholicit was the same as yours, just (Long. eid) is different#2015-07-2916:08caskolkmOk, i will try it tomorrow#2015-07-2916:30tcrayford@robert-stuttaford: as somebody who's also been using datomic for a long time, I agree that it's confusing 😞#2015-07-3003:39erichmondis GPG broken on el capitan? I am trying to work with datomic-pro and am getting "gpg: gpg-agent is not available in this session" even though that daemon is indeed running#2015-07-3006:21caskolkm@bkamphaus @bostonaholic: still the same error using: @(d/transact conn [[:db.fn/retractEntity (Long. company-id)]])#2015-07-3012:42mitchelkuijpers@bkamphaus: @bostonaholic I found our problem with retracting an entity somehow we saved the entities in the db.part/tx
which is obviously wrong 😅#2015-07-3014:21bkamphaus@mitchelkuijpers: that would do it. Note that to have gotten this outcome using e.g. the map form, you’d have specified a tempid for :db.part/tx in the same map. The attributes in that map then become attributes on the transaction entity.
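The pitfall being discussed, and the fix of supplying separate maps, might be sketched as follows (`conn` and the `:company/name` and `:audit/source` attributes are hypothetical):

```clojure
(require '[datomic.api :as d])

;; Pitfall: one map with a :db.part/tx tempid makes every other key
;; an attribute of the transaction entity itself.
@(d/transact conn
  [{:db/id        (d/tempid :db.part/tx)
    :company/name "Acme"}]) ; :company/name lands on the tx entity

;; Correct: separate maps — one for the domain entity, one for the
;; transaction annotation.
@(d/transact conn
  [{:db/id        (d/tempid :db.part/user)
    :company/name "Acme"}
   {:db/id        (d/tempid :db.part/tx)
    :audit/source "import-job"}])
```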
For annotating a transaction in the map form case you would need to supply separate maps for the attributes intended as tx annotations and attributes intended for a new or existing entity. Example (though in Java) at: http://docs.datomic.com/transactions.html#reified-transactions#2015-07-3014:34erichmondFollow-up: It was because I didn’t fully nuke the gpg installed by brew before installing the one recommended by leiningen.#2015-07-3014:44maxI just realized that postgres is using 30gb of storage for a small app#2015-07-3014:45maxI assume this is because I haven't been garbage collecting, or am I messing something up bad#2015-07-3015:05erichmondWhat is the best tutorial for someone who wants to use datomic + clojure#2015-07-3015:05erichmondthese docs are a mess#2015-07-3015:22bkamphaus@erichmond: the day-of-datomic repo is at: https://github.com/Datomic/day-of-datomic — if you’re talking about the tutorial on the docs page, it’s available in clojure in the datomic directory as mentioned here: http://docs.datomic.com/tutorial.html#following-along — if you’re looking at query specifically: http://docs.datomic.com/query.html points to the clojure examples from day-of-datomic here: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/query.clj#2015-07-3015:23erichmond@bkamphaus: thanks, also, this datomic for 5 year olds is helping too#2015-07-3015:25marshall@erichmond: We also have the full Day of Datomic training session as a series of videos here: http://www.datomic.com/training.html#2015-07-3015:25bhaganyI really got a lot out of those videos, fwiw#2015-07-3015:26meowI've heard good things about http://www.learndatalogtoday.org/#2015-07-3015:26erichmondthanks, I’ll check out the videos too.#2015-07-3015:27bkamphaus@max: you should be doing some gc http://docs.datomic.com/capacity.html#garbage-collection — you may also have to take additional steps for postgres (and other storages) to reclaim data, e.g. 
VACUUM https://www.postgresql.org/docs/9.1/static/sql-vacuum.html#2015-07-3015:27erichmondActually, all the querying and whatnot is pretty straightforward to me#2015-07-3015:27maxso it looks like my vm ran out space (I had a 40gb vm)#2015-07-3015:27maxI upped the disk space#2015-07-3015:27maxand my database size is only growing#2015-07-3015:27maxand the transactor is unavailable#2015-07-3015:27erichmondI was looking more for “10 steps to firing up a mem based datomic connection” “10 steps to firing up a dev based datomic connection + datomic console”#2015-07-3015:27maxwill this resolve itself?#2015-07-3015:28erichmondI’m realizing now, if I want to run mem, I don’t even seem to need to download that datomic.zip, etc#2015-07-3015:29bkamphaus@max not enough info to tell. can you tail the logs to see if the txor is busy? e.g. indexing#2015-07-3015:29maxbkamphaus: debugging this I also found that my only log file is log/2015-06-26.log.#2015-07-3015:30maxI kept the default logback.xml#2015-07-3015:30maxso that’s another issue#2015-07-3015:30maxis there another place they could be?#2015-07-3015:30bkamphaus@max: does your transactor properties file specify a different log location?#2015-07-3015:33maxbkamphaus: ah thanks. Okay so it’s indexing#2015-07-3015:35maxaw crap#2015-07-3015:35maxI may have done a bad thing.#2015-07-3015:36maxI accidentally shoved some ~860kb strings into datoms#2015-07-3015:36maxam I hosed here?#2015-07-3015:38bkamphauswell it definitely can kill perf stuff, and will depend on how your system is provisioned. But yeah, you definitely want to avoid large blobby stuff in datoms. options for recovery — do you have a recent backup? You can also excise that stuff.#2015-07-3015:38bkamphausare those fields in :avet? i.e. 
indexed -- that’s when it will hurt the most by far.#2015-07-3015:38maxthey are indexed#2015-07-3015:42maxbkamphaus: in the future, if i want to store this, doing noHistory and without index would be a bad idea still?#2015-07-3015:43bkamphausless of a bad idea, but I’d still avoid it. Indexing it guarantees that it will be a huge perf drag. Your best option for blob/document type stuff is to put it in storage directly and store the pointer/ref/key w/e for it in Datomic in the datom#2015-07-3015:44bkamphausor a file store, e.g. s3#2015-07-3016:04bkamphaus@arohner: Stu has replied re: your questions/issues on bytes reported here in slack and on group https://groups.google.com/forum/#!topic/datomic/JqXcURuse1M#2015-07-3016:04arohner@bkamphaus: yeah I just saw, thanks#2015-07-3016:04bkamphausDatomic 0.9.5206 has been released https://groups.google.com/forum/#!topic/datomic/kEAqsjeeMaE#2015-07-3016:19erichmond@bkamphaus: do you work on datomic for cognitect?#2015-07-3016:39bkamphaus@erichmond: yes, I’m on the Datomic team at Cognitect.#2015-07-3016:39erichmondvery cool!#2015-07-3016:43bkamphausI agree that it’s very cool to be on this team. simple_smile Also, typing is hard.#2015-07-3017:01maxbkamphaus: thanks for your help so far.#2015-07-3017:02maxI tried to run a garbage collect and an excision of one of the attributes#2015-07-3017:02maxmy database size is still growing (33->46 gb in the past hour)#2015-07-3017:02maxand datomic is running at 100% cpu#2015-07-3017:03maxhere's a tail of the log#2015-07-3017:03maxso it looks like i'm still indexing?#2015-07-3017:04bkamphaus@max where you’re at, you’re waiting on indexing to push through — it will have to complete before space can be reclaimed and it will probably take longer for excision, etc. 
(more indexing necessary) — gc also competes for transactor resources — cpu/mem.#2015-07-3017:04bkamphausfrom the log tail, seems that way#2015-07-3017:04maxhow long can I expect to wait and is there anything I can do to speed it up#2015-07-3017:05maxmy hd is now 160gb, can I be reasonably sure I won’t hit that?#2015-07-3017:05bkamphaushow many attr val pairs were targeted by the excision?#2015-07-3017:05max1#2015-07-3017:05maxI was just doing a test on one datom.#2015-07-3017:14bkamphaus@max can you grep for successfully completed indexing jobs, e.g. :CreateEntireIndexMsec metrics, index specific completion messages, grep ":[ea][aev][ve]t, :phase :end" *.log, possible failures (just grep for AlarmIndexingFailed).#2015-07-3017:16maxthe last index specific completion was 3 hours ago
2015-07-30 11:58:23.370 INFO default datomic.index - {:tid 150, :I 5265000.0, :index :eavt, :phase :end, :TI 8465930.540997842, :pid 1480, :event :index/merge-mid, :count 2110, :msec 14400.0, :S -283878.0409978423, :as-of-t 961005}
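The greps bkamphaus suggests could also be run from a Clojure REPL; a rough sketch (the log path is hypothetical):

```clojure
(require '[clojure.string :as str])

;; Scan a transactor log for completed index merges and indexing
;; alarms, mirroring the suggested greps.
(defn scan-transactor-log [path]
  (let [lines (str/split-lines (slurp path))]
    {:index-ends (filter #(re-find #":phase :end" %) lines)
     :failures   (filter #(re-find #"AlarmIndexingFailed" %) lines)}))

(scan-transactor-log "/var/log/datomic/2015-07-30.log")
```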
#2015-07-3017:17maxI have an AlarmIndexingFailed once a minute
2015-07-30 13:15:47.572 INFO default datomic.process-monitor - {:tid 13, :AlarmIndexingFailed {:lo 1, :hi 1, :sum 4, :count 4}, :CreateEntireIndexMsec {:lo 16500, :hi 18600, :sum 70500, :count 4}, :MemoryIndexMB {:lo 0, :hi 0, :sum 0, :count 1}, :StoragePutMsec {:lo 1, :hi 239, :sum 11097, :count 381}, :AvailableMB 2640.0, :IndexWriteMsec {:lo 1, :hi 659, :sum 35259, :count 381}, :RemotePeers {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5000, :hi 5346, :sum 60427, :count 12}, :Alarm {:lo 1, :hi 1, :sum 4, :count 4}, :StorageGetMsec {:lo 0, :hi 124, :sum 2204, :count 305}, :pid 1480, :event :metrics, :StoragePutBytes {:lo 103, :hi 4568692, :sum 128385966, :count 382}, :ObjectCache {:lo 0, :hi 1, :sum 231, :count 536}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :StorageGetBytes {:lo 1853, :hi 4568435, :sum 95278692, :count 305}}
2015-07-30 13:16:47.573 INFO default datomic.process-monitor - {:tid 13, :TransactionDatoms {:lo 3, :hi 3, :sum 3, :count 1}, :AlarmIndexingFailed {:lo 1, :hi 1, :sum 3, :count 3}, :GarbageSegments {:lo 2, :hi 2, :sum 4, :count 2}, :CreateEntireIndexMsec {:lo 15800, :hi 17400, :sum 50500, :count 3}, :MemoryIndexMB {:lo 0, :hi 0, :sum 0, :count 1}, :StoragePutMsec {:lo 1, :hi 291, :sum 11173, :count 474}, :TransactionBatch {:lo 1, :hi 1, :sum 1, :count 1}, :TransactionBytes {:lo 102, :hi 102, :sum 102, :count 1}, :AvailableMB 2460.0, :IndexWriteMsec {:lo 2, :hi 350, :sum 36373, :count 471}, :RemotePeers {:lo 1, :hi 1, :sum 1, :count 1}, :HeartbeatMsec {:lo 5000, :hi 5003, :sum 60006, :count 12}, :Alarm {:lo 1, :hi 1, :sum 3, :count 3}, :StorageGetMsec {:lo 0, :hi 100, :sum 2151, :count 351}, :TransactionMsec {:lo 19, :hi 19, :sum 19, :count 1}, :pid 1480, :event :metrics, :StoragePutBytes {:lo 86, :hi 4568692, :sum 146567666, :count 473}, :LogWriteMsec {:lo 8, :hi 8, :sum 8, :count 1}, :ObjectCache {:lo 0, :hi 1, :sum 247, :count 598}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :PodUpdateMsec {:lo 2, :hi 7, :sum 9, :count 2}, :StorageGetBytes {:lo 86, :hi 4568435, :sum 94665879, :count 351}}
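Each metrics line like the ones above ends in an EDN map, so alarm counts can be pulled out programmatically; a sketch:

```clojure
(require '[clojure.edn :as edn])

;; Extract the trailing EDN map from a metrics log line and report
;; how many AlarmIndexingFailed events it records.
(defn indexing-alarm-count [log-line]
  (let [m (edn/read-string (subs log-line (.indexOf log-line "{")))]
    (get-in m [:AlarmIndexingFailed :count] 0)))
```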
#2015-07-3017:19bkamphaus@max which version of Datomic are you running?#2015-07-3017:19maxdatomic-pro-0.9.5173#2015-07-3017:22bkamphauscan you do a failover or start/restart to upgrade to 0.9.5201 (or latest 0.9.5206) to see if the indexing job is then able to run to completion?#2015-07-3017:22maxokay#2015-07-3017:22maxany reason to go with 5201 vs 5206?#2015-07-3017:23bkamphausI’d just drop into latest 5206 if no preference, 5201 is just minimal to get past a fix for a related issue. 0.9.5206 only adds error handling/explicit limits to byte attributes#2015-07-3017:32maxbkamphaus: I updated, am getting some out of memory errors
2015-07-30 13:31:43.668 WARN default datomic.update - {:tid 77, :pid 10386, :message "Index creation failed", :db-id "canary-f3e9a40e-2036-4ad9-aae7-52919cced434"}
java.lang.OutOfMemoryError: Java heap space
#2015-07-3017:33maxI’m using
# Recommended settings for -Xmx4g production usage.
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=1g
#2015-07-3017:35bkamphaus@max some follow up q’s then — can you verify you’re using GC defaults? Either only setting Xmx, xmx as transactor args, or if using JAVA_OPTS, adding -XX:+UseG1GC -XX:MaxGCPauseMills=50 to keep GC defaults? Also, would it be possible to up -Xmx (what’s current + available on machine)?#2015-07-3017:36max exec /var/lib/datomic/runtime/bin/transactor -Xms4g -Xmx4g /var/lib/datomic/transactor.properties 2>&1 >> /var/log/datomic/datomic.log#2015-07-3017:36maxthat’s my datomic command#2015-07-3017:36maxI could up the memory, should I change transactor props also?#2015-07-3017:41bkamphaus@max I would up memory, double it if you can — maybe up object-cache-max only slightly (i.e. to 25% of heap or so, not up to 1/2 for sure). I.e. something like -Xmx 8g, object-cache-max=2g, rest same#2015-07-3017:41maxok#2015-07-3017:49maxbkamphaus: the excision finished!#2015-07-3017:49maxthanks!#2015-07-3017:50bkamphaus@max awesome — make sure and spread out the excision the way you’d normally pipeline txes on an import#2015-07-3017:50bkamphausassuming you’re following up by removing more of the blobby string vals#2015-07-3017:50maxthere are only 35 attrs to excise#2015-07-3017:51max…my postgres db size is at 51gbs though#2015-07-3017:51bkamphausah, cool, so less of an issue then. as stuff pushes through, you’ll be able to run gc (or maybe it’s already running?)#2015-07-3017:51maxI ran a datomic garbage collect and it didn’t seem to do much, I assume I should run it again and vacuum#2015-07-3017:51bkamphausit runs async#2015-07-3017:52bkamphausbut yes you should do it after excision, more segments will need to be gc’d after that#2015-07-3017:52bkamphausthe gc-storage call when finished will log something like: 2014-08-08 03:24:14.174 INFO default datomic.garbage - {:tid 129, :pid 2325, :event :garbage/collected, :count 10558}#2015-07-3017:52maxso, how did this happen? I had 35 blobs some of which were like a meg at most. And the rest of my data is pretty small. 
How did my db grow to 51gigs?#2015-07-3017:52maxAnd how do I make sure it doesn’t happen again, garbage collect daily?#2015-07-3017:53bkamphausI don’t know how much segment churn you go through, but it does build up over time from indexing. The blobs can be particularly bad with :avet on.#2015-07-3017:54bkamphausNightly may not be necessary, but you can set up a gc-storage call to run at w/e period you determine is necessary#2015-07-3017:55tcrayford(as a side reference, for my [relatively normal] webapp, I run it at application bootup, because only the webservers are datomic peers and they're deployed together)#2015-07-3017:55bkamphausand then periodically I’m assuming you’ll need to VACUUM in postgres before space is reclaimed since the deletion in Datomic will be handled/deferred by table logic in the storage#2015-07-3017:56bkamphausi.e. Cassandra via tombstone, Oracle space reclamation is deferred by High-water Mark stuff, etc.#2015-07-3017:58maxcool, thanks so much for your help @bkamphaus#2015-07-3018:09micahWeird datomic error throwing me for a loop:#2015-07-3018:09micahairworthy.repl=> @(api/transact @db/connection [{:segue/time #inst "2015-04-09T05:32:48.000-00:00", :segue/way :out, :segue/airport 277076930200614, :segue/user 277076930200554, :db/id 277076930200690}])
IllegalArgumentExceptionInfo :db.error/not-an-entity Unable to resolve entity: Thu Apr 09 00:32:48 CDT 2015 in datom [277076930200690 :segue/user #inst "2015-04-09T05:32:48.000-00:00"] datomic.error/arg (error.clj:57)#2015-07-3018:10micahWhy does it think the date is an entity?#2015-07-3018:11shaunxcodewhat is schema for :segue/time ?#2015-07-3018:11micah:instant#2015-07-3018:11shaunxcodeand :segue/user ?#2015-07-3018:12micah:ref#2015-07-3018:14mitchelkuijpersThank you for your help @bkamphaus#2015-07-3019:05max@bkamphaus: I ran a garbage collect and a vacuum, but pg still says the datomic database size is 51gb. Any suggestions?#2015-07-3019:24bkamphaus@max have you backed the db up recently so that you have a reference for how large the backup is?#2015-07-3020:09max@bkamphaus: 150mb#2015-07-3020:10bkamphaus@micah: as a sanity check, I would verify that all entities in the transaction exist and that all attr keywords are specified correctly (e.g. spelled correctly) and exist, including the (assumed enum) :out entity — it may be that something else wrong in the transaction is causing it to resolve to the incorrect datom that’s transacting the date as the value for the :segue/user attr (the cause of the exception)#2015-07-3020:13bkamphaus@max that datomic db is the only one in that instance? you haven’t e.g. run tests that generate and delete dbs (note that dbs when deleted have to be [garbage collected](http://docs.datomic.com/capacity.html#garbage-collection-deleted), also). And definitely nothing else you’re storing in postgres?#2015-07-3020:14maxI only use memory dbs for testing, and I don’t run tests on the production instance#2015-07-3020:15maxThere is another database on the pg instance, but it’s tiny:
datomic=> SELECT pg_database.datname,
pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database;
datname | size
------------+---------
template1 | 6314 kB
template0 | 6201 kB
postgres | 6314 kB
canary-web | 7002 kB
datomic | 51 GB
(5 rows)
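For the cleanup being discussed, Datomic-side garbage collection is an explicit, asynchronous call; a sketch (`conn` is hypothetical):

```clojure
(require '[datomic.api :as d])
(import 'java.util.Date)

;; Collect storage garbage older than now; runs asynchronously on the
;; transactor, which logs a :garbage/collected event when finished.
(d/gc-storage conn (Date.))
```

Space at the storage level (such as the 51 GB table above) is only returned once the storage itself compacts, e.g. a Postgres VACUUM.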
#2015-07-3020:29bkamphaus@max have you restored versions of the same database when the restore has diverged? The incompatible restore is one thing I’m aware of which can potentially orphan segments so that they never get gc’d.#2015-07-3020:30maxI don’t think so. This is the production db, so it was initially restored from a seed db, and then only backed up#2015-07-3020:31maxit looks like a lot of the growth (20gbs worth!) happened after I ran out of disk space last night and was trying to do excisions.#2015-07-3020:32bkamphausDBs do pick up small amounts of cruft from operational churn, but this is well out of line with my expectation for the size of it. Depending on what kind of outage you could tolerate, you could do a test restore from backup to a clean postgres in a dev/staging environment and see what the resulting table size is.#2015-07-3020:34bkamphausThe failure to index could be contributing then, maybe leaving orphaned segments somehow. There’s always the possibility of clobbering the table and starting from a clean restore, obviously you want to backup and test a restore as I mentioned above first before considering going down that path.#2015-07-3020:35bkamphausDo you know what the table size was prior to running into the indexing failure?#2015-07-3020:36maxI’m not sure, I ran out of disk space at ~30gb#2015-07-3020:36maxI’m assuming it’s going to affect performance to keep this 51gb database around#2015-07-3021:05maxSo I did a restore on my dev system, and the pg database is 142mb after restore.
I can do a restore in prod again, but I’m worried about this happening again. Any suggestions as to what to do at this point?#2015-07-3021:06maxis it possible I hit a bug in datomic?#2015-07-3021:12bkamphaus@max hard to speculate about a possible bug without knowing more specifics. I’m wondering how much of this can be attributed to the failures to index w/the blob-ish strings. My general advice would be to make sure and make regular backups, and configure some kind of monitoring for Alarm* events - so that you can jump in more quickly (i.e. reacting to AlarmIndexingFailed, rather than to running out of space).#2015-07-3021:13maxbkamphaus: that makes sense, and it’s definitely my next plan of action#2015-07-3021:14maxwe ran out of space at db size 30 gb, so there must have been some failure before that that caused that 30gb to be written#2015-07-3021:14maxbut I guess that could have been cascading indexing failures?#2015-07-3021:15bkamphausI think it’s fairly typical for dbs in production over time to accumulate a little bit of cruft, but nothing like the difference in size from your backup to postgres table, which is why I think it must be linked to that indexing failure. I haven’t seen another report of that much excess size, usually when I’ve looked through those concerns about size differences it’s still less than 2-3x the expected size (after accounting for e.g. storages with replication factors, etc.) on dbs that have been running for a long time, nothing orders of magnitude larger than expected size like this — except with whole dbs not gc’d, or gc never having been run, etc.#2015-07-3021:16maxok. I’ll set up better monitoring and see if it happens again#2015-07-3021:17maxone more question: we’re not using aws, and I am using datadog for this data. 
Do you generally recommend to use the built in cloudwatch stuff and push that data to other services, or is integrating with a non-AWS monitoring service pretty easy?#2015-07-3021:21bkamphaus@max we definitely have users doing both. Cloudwatch is what we use at Cognitect and test the most, but lots of people on premise just configure their own callback ( http://docs.datomic.com/monitoring.html#sec-2 ) stuff or point it at various other logging/metric tools.#2015-07-3022:16micah@bkamphaus: Thanks for the tip. I verify everything is correctly spelled and schema-fied.#2015-07-3114:16raymcdermottquick question … can I filter datoms based on transaction data? I guess yes but is that via a standard query or do I have to write code?#2015-07-3114:18raymcdermottin my use case, I have two transactions on the same data but would sometimes like to show the data back to the user based on the source system (tracked in the transaction)#2015-07-3114:19tcrayford@raymcdermott: the transaction entities are perfectly queryable from normal queries#2015-07-3114:20raymcdermottok cool - that’s what I hoped but I cannot see any examples#2015-07-3114:21tcrayfordyou just join against them via the part of the datom that is the transaction id simple_smile#2015-07-3114:22raymcdermottah - ok so I wouldn’t need to use the history view? Or maybe I would combine that?#2015-07-3114:23raymcdermottI see the example#2015-07-3114:23raymcdermott[:db/add
#db/id[:db.part/tx]
:data/src
"http://example.com/catalog-2_29_2012.xml"]#2015-07-3114:23raymcdermottso (just to nail it) I can just add :data/src to the query?#2015-07-3114:24tcrayfordyou'd need to join against txid, but yeah#2015-07-3114:24raymcdermottah, ok and how would that work with pull?#2015-07-3114:24potetm@raymcdermott: I believe it depends if the data you’re interested in is in the current db or not.#2015-07-3114:25tcrayford(d/q '[:find ?src :where [_ :user/email _ ?tx] [?tx :data/src ?src]] …)#2015-07-3114:25tcrayfordwith pull - I wouldn't be too surprised if it didn't work with transaction entities, or if it did. If it does, you'd probably just use the txid as the entity id#2015-07-3114:26tcrayford(I'd have to try it at a repl, but I can't right now easily)#2015-07-3114:27raymcdermottok let me try and play over the weekend and I’ll come back here#2015-07-3114:27raymcdermottthanks for the great guidance so far!#2015-08-0101:22maxsomething really strange is going on#2015-08-0101:22maxI have an attribute that has cardinality one, but it has two values#2015-08-0101:22maxuser=> (d/pull db '[* {:db/cardinality [:db/ident]} {:db/valueType [:db/ident]}] :server/last-heartbeat-at)
{:db/index false, :db/valueType {:db/ident :db.type/instant}, :db/noHistory false, :db/isComponent false, :db/fulltext false, :db/cardinality {:db/ident :db.cardinality/one}, :db/doc "", :db/id 158, :db/ident :server/last-heartbeat-at}
user=> (d/q '[:find ?e ?hb :in $ ?e :where [?e :server/last-heartbeat-at ?hb]] db 17592186962108)
#{[17592186962108 #inst "2015-07-31T23:45:01.167-00:00"] [17592186962108 #inst "2015-08-01T00:45:01.195-00:00"]}
user=>
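For readers hitting something similar: one way to confirm what is actually stored is to read the raw EAVT datoms instead of querying. This is a sketch only, assuming a connected peer where datomic.api is required as d and conn is the connection from the session above:

```clojure
;; Sketch: list the current datoms for the suspect attribute on this entity.
;; A :db.cardinality/one attribute should yield exactly one datom here.
(let [db (d/db conn)]
  (doseq [datom (d/datoms db :eavt 17592186962108 :server/last-heartbeat-at)]
    ;; :a is the attribute's entity id, not its ident
    (println (:e datom) (:a datom) (:v datom) (:tx datom))))
```

Seeing two datoms printed would confirm the duplicate really is present in the index, rather than being an artifact of the query.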
#2015-08-0113:21bkamphaus@max: we’ll follow up with diagnostics, etc. on the support ticket. If this is a high churn attribute (the name leads me to suspect that it is), we have a suspicion about what’s going on. Will confirm before going into it more.#2015-08-0113:21maxthanks @bkamphaus#2015-08-0114:57potetm@bkamphaus: I would be very interested in knowing what ya’lls suspicion is, even if it turns out that it isn’t the cause of this issue.#2015-08-0115:27robert-stuttaford@potetm, @bkamphaus ditto#2015-08-0202:31lboliveiraHello! How do I get a max value and its entity id using the same query?
(d/q '[:find [(max ?a) ...]
:where [?e :a ?a]]
[[1 :a 10]
[2 :a 20]
[3 :a 30]])
=> [30]
Returns 30. ok.
(d/q '[:find [(max ?a) ?e]
:where [?e :a ?a]]
[[1 :a 10]
[2 :a 20]
[3 :a 30]])
=> [10 1]
Returns [10 1]. How do I write a query that returns [30 3]?#2015-08-0202:55bhaganyso… I can see what it's doing, but I'm not sure how to make it not do that#2015-08-0202:56bhaganyanytime you include a non-aggregate lvar in :find, it's going to group the aggregates by that lvar#2015-08-0202:56bhaganyI think I would just issue one query for (max ?a), and a second for ?e#2015-08-0202:58bhaganyI think that's the only way because there may be more than one ?e#2015-08-0202:58bhaganyheh, I should have tagged you - @lboliveira ^^#2015-08-0203:00lboliveira@bhagany: Ty. So the idiomatic way to query the id is making two queries?#2015-08-0203:01bhagany@lboliveira: in this case, I would say yes. In general, you don't need to worry about round tripping to the database with datomic like you do with other db's#2015-08-0203:02bhaganyI'm pretty positive that ?e is already going to be in local memory because of the first query#2015-08-0203:06lboliveira@bhagany: I am wondering if I could have some issues if a new value is inserted between these calls.#2015-08-0203:07bhagany@lboliveira: ah, this is another thing you don't need to worry about with datomic simple_smile#2015-08-0203:07bhaganyif you pass the same db value in to d/q, that is guaranteed not to happen; it's immutable#2015-08-0203:08bhaganyso you can be completely assured that you're getting a consistent view of your data#2015-08-0203:08lboliveirayou are so right. It takes time to wrap the mind around it. 😃#2015-08-0203:08lboliveirathank you so much.#2015-08-0203:09bhagany@lboliveira: my pleasure simple_smile it took me a bit to wrap my head around it too.#2015-08-0203:09lboliveiraand the "don't worry about round trips".#2015-08-0203:09lboliveirait makes all queries different#2015-08-0203:10lboliveiraIt is a very cool way to interact with the database.#2015-08-0203:12bhaganyyes, very! there's a great talk by Rich where he goes into detail about the benefits of the datomic operational model, and this one really struck me.
It's just so nice not to have to grab all the data you might need up front#2015-08-0203:14lboliveira😃#2015-08-0203:18lboliveirahttps://github.com/Yuppiechef/datomic-schema
This is to arbitrarily support extra generating options, including the new index-all? option, which flags every attribute in the schema for indexing (in line with Stuart Halloway's recommendation that you simply turn indexing on for every attribute by default).
@bhagany: Do you have any thoughts about it?#2015-08-0203:18bhagany@lboliveira: I haven't used it, but I have seen people refer positively to it here and on IRC.#2015-08-0203:19lboliveiraAnd about index all?#2015-08-0203:19bhaganypersonally, I don't have a problem with the raw schema, and I kind of like having it there as data.#2015-08-0203:19bhaganydo you mean having :db/index true on all attributes?#2015-08-0203:19lboliveiraYes. Do you do that?#2015-08-0203:20bhaganyoh, I missed your first message somehow. Yes, I do that, based on the same recommendation from Stu.#2015-08-0203:21lboliveiraThis is a "wow" thing to me.#2015-08-0203:22bhaganysimple_smile#2015-08-0203:25lboliveiraI could not find the Halloway's recommendation. Do you have a link? I have some boolean attributes. It seems odd to index them.#2015-08-0203:26bhaganyI don't have a direct link, I saw it in one of the Day of Datomic videos#2015-08-0203:26bhaganyhere: http://www.datomic.com/training.html#2015-08-0203:26lboliveiraty#2015-08-0203:26bhaganynp simple_smile#2015-08-0203:42lboliveira{:db/error :db.error/incompatible-schema-install, :entity :ping.reply/start, :attribute :db/index, :was false, :requested true}
:ping.reply/start is a :db.type/instant#2015-08-0203:43lboliveiraI was trying to add an index to it#2015-08-0204:00bhagany@lboliveira: hmm, I would guess you can't change the :db/index setting of an installed attribute?#2015-08-0204:00bhaganythis rings a bell, I bet it's partially why they started recommending that you index all attrs. It isn't too expensive, and much easier than adding it after the fact.#2015-08-0204:02lboliveira@bhagany: This post says I can set :db/index to true : https://groups.google.com/forum/#!msg/datomic/UHGf2beACog/GKHqoSig0noJ#2015-08-0204:03bhaganyaha, did you do the :db/alter thing?#2015-08-0204:06lboliveirano 😳#2015-08-0204:06bhagany😄#2016-02-0216:23timgilbertThanks @bkamphaus. I did scan the "uniqueness" page looking for this, but then I thought I remembered it from one of the datomic videos, which are a little hard to grep 😉#2016-02-0216:25lucasbradstreet@bkamphaus: thanks so much. I guess I should’ve done more homework in the docs!#2016-02-0216:25bkamphaus@lucasbradstreet: no worries, I’ll admit it’s not immediately apparent that you should jump to the caching topic to answer that question simple_smile#2016-02-0216:26lucasbradstreetYeah, heh. What does “notify” mean here? "A peer updates the memory index when notified by the transactor of a transaction." Is that as little as the tx-id?#2016-02-0216:29bkamphausyep. with peer logging on you’ll see a message like:
2016-01-26 14:16:43.046 INFO default datomic.peer - {:event :peer/notify-data, :msec 2, :id #uuid "56a7e23a-2e3e-41ca-bf4d-b9113aba6e41", :pid 24212, :tid 28}
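For anyone wanting to reproduce this: the peer library logs through SLF4J, so a minimal Logback configuration on the peer's classpath surfaces these events. A sketch only; the appender choice and log levels here are illustrative, not Datomic's shipped defaults:

```xml
<!-- logback.xml on the peer classpath: INFO on the datomic.peer logger
     shows events such as :peer/notify-data when the transactor notifies
     the peer of a transaction -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date %level %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="datomic.peer" level="INFO"/>
  <root level="WARN">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```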
#2016-02-0216:39lucasbradstreetAh, cool, I am definitely going to turn peer logging on. That’s a good trick#2016-02-0217:00sonnytodoes anyone know of a tool that will generate datomic schema from prismatic schema?#2016-02-0218:20PBCan anyone tell me of a way to query on a date range on a non-indexed attribute?#2016-02-0218:36bkamphaus@petr same query will work with or without index, although performance will differ. Do you mean a date for your own attribute or domain, or Datomic transaction time?#2016-02-0218:51lucasbradstreetIf I add a UUID attribute with db.unique/identity, that will mean that my whole entity is stored an extra time (so four times vs three), since it’s additionally stored in AVET, right?#2016-02-0219:06PBbkamphaus: it’s a date for my own attribute#2016-02-0219:06bkamphaus@lucasbradstreet: that’s partially correct, avet will be set to true for any attribute, but only that attribute/value will be indexed, not everything on the entity.#2016-02-0219:07PBI have only found datomic.api/index-range though (http://docs.datomic.com/clojure/#datomic.api/index-range)#2016-02-0219:07bkamphausentities are derived from datoms, not directly stored in their entirety in indexes.#2016-02-0219:07PBI basically want to find all entities that had a date attribute between two date-times#2016-02-0219:07bkamphaus@petr you can use standard comparison, < and > etc. in query as the default case. index-range or datoms with :avet will work if you need to page through things by time.#2016-02-0219:09PBThanks!#2016-02-0219:09bkamphaus@petr sorry re: the datoms + :avet, just remembered your condition specified not indexed. So, yes, index-range and datoms with :avet won’t work, but query will.#2016-02-0219:10lucasbradstreetOh that totally makes sense#2016-02-0219:10bkamphausit’s also fairly cheap to turn on :avet - especially for a regularly sized value like an inst, long, etc.
any particular motivation for keeping it off?#2016-02-0219:10PBbkamphaus: [(< :moo/some-date #inst "2015-10-14T17:30:00.953-00:00")] ?#2016-02-0219:11lucasbradstreetSo maybe it’ll be used to look up the eid, and then if you wanted to access a bunch of attributes on that entity, they’d be accessed via EAVT#2016-02-0219:13lucasbradstreetThanks#2016-02-0219:15PBNevermind. I just bind it to a var#2016-02-0219:15bkamphaus@petr I would parameterize the time values myself, i.e. have ?inst1 and ?inst2 in the :in and provide values.#2016-02-0219:16PBYep, I would do too. Was just testing#2016-02-0222:00gworley3i'm trying to get cloudwatch monitoring to work but so far not having any luck. i wrote up the details of what i've done on a stackoverflow question. any suggestions of what i could do to get it working would be appreciated https://stackoverflow.com/questions/35164549/monitoring-datomic-in-cloudwatch-without-cloudformation#2016-02-0222:06bkamphaus@gworley3: I’ve only used our documented permission granularity or let it be set via the ensure-transactor process and never had any issues. i.e.,
{"Statement":
[{"Resource":"*",
"Effect":"Allow",
"Action":
["cloudwatch:PutMetricData", "cloudwatch:PutMetricDataBatch"],
"Condition":{"Bool":{"aws:SecureTransport":"true"}}}]}
#2016-02-0222:07bkamphausin general my first troubleshooting step (if the situation allows) for any AWS config that’s not working is to try and run the transactor locally with keys in the environment with pretty wide permissions, so I can get a sanity check on my settings with the complexity of role config factored out.#2016-02-0222:08bkamphausYour situation may or may not allow for a troubleshooting step like that, of course.#2016-02-0222:14bkamphaus@gworley3: also just a sanity check, you’re using Pro or Pro Starter?#2016-02-0222:14gworley3@bkamphaus: pro starter#2016-02-0222:15bkamphausok, should be fine, it’s just free that's not supported for cloudwatch metrics.#2016-02-0222:36gworley3interesting. when i look at the iam role access advisor it says nothing has tried to access cloudwatch through the role#2016-02-0222:52gworley3also, where should i expect to see them show up when it works? metrics on the ec2 box or as a separate datomic section or somewhere else?#2016-02-0223:24bkamphaus@gworley3: re: where they’ll show up, I use CloudWatch from the AWS console, from the left drop down menu there’s a “Custom Metrics” drop down where you can select “Datomic"#2016-02-0223:26gworley3ah, ok. i don't (yet) show anything like that#2016-02-0223:30bkamphausI would double check that the IAM Role that displays on the instance description in the EC2 Dashboard is the correct one, also. I just checked a working transactor IAM role and its inline policy for metrics is verbatim from the docs:
{"Statement":
[{"Resource":"*",
"Effect":"Allow",
"Action":
["cloudwatch:PutMetricData", "cloudwatch:PutMetricDataBatch"],
"Condition":{"Bool":{"aws:SecureTransport":"true"}}}]}
#2016-02-0223:31bkamphauson startup I usually see it take 5 minutes or so for metrics to show up.#2016-02-0223:34gworley3i changed the role to have this exact policy but still not seeing anything#2016-02-0300:13gworley3just thinking of other things that could interfere (or at least maybe could in my mind since I don't know the code): i'm not shipping logs to s3 and i'm running this on a box i built on aws running ubuntu 14.04 without using either cloudfront or the datomic transactor ami and i'm using cassandra as the datastore#2016-02-0315:21chadhsarchitecture question: could you start by running everything on one server instance: nginx, your clojure app + datomic peer, datomic transactor, and sql storage?#2016-02-0315:22chadhsthen grow by breaking things out… like moving storage to dynamodb#2016-02-0315:22chadhsetc etc#2016-02-0315:23bkamphaus@chadhs: for testing, initial running it would work, but we don’t provide support for datomic configs that aren’t distributed in production. The reason being that combining processes that way impacts the stability of other processes and you need storage and the transactor to run smoothly to avoid hiccups in availability.#2016-02-0315:23chadhs@bkamphaus: so at a minimum you’d want appserver, transactor, storage split#2016-02-0315:25bkamphaus@gworley3: I’m still stuck thinking if there are any other differences I can probe. I may run an end-to-end deploy/config test with the latest version for aws metric reporting to see if anything of note comes out. 
Apart from that, not sure what the difference could be.#2016-02-0315:26bkamphaus@chadhs: correct.#2016-02-0315:27chadhscool thnx, that helps#2016-02-0315:35timgilbertHey, quick question about the console and licensing: I'm setting up a separate staging and production environment, and I'm considering having the console running on a dedicated server somewhere via console -p 8080 staging datomic: prod datomic:#2016-02-0315:36timgilbert...so I'm wondering if doing that would wind up consuming a processor license from both staging and prod even when nobody is actively using the console, or whether it only takes up a process when someone has, say, logged into it and selected "staging" from the dropdown#2016-02-0315:37bkamphaus@timgilbert: whenever it’s running it consumes a process.#2016-02-0315:37timgilbertPart of the motivation is to allow developers to use the console without necessarily needing the full datomic install on their laptops#2016-02-0315:38timgilbertOk, thanks @bkamphaus. So if I run it as above, it will be consuming one each from staging and production as long as the process is up, correct?#2016-02-0315:39bkamphausthat’s correct (it connects to each as a peer)#2016-02-0315:39timgilbertOk, cool. Thanks again for the info.#2016-02-0318:18gworley3@bkamphaus: thanks for taking a look. i keep hoping there's something obvious that i've failed to do that would address it. fyi running version 0.9.5344#2016-02-0319:14currentoorIs there a way to get the size of the DB?#2016-02-0319:23arohnercurrentoor: not via the API, that I’m aware of. But you can always go look at your storage directly#2016-02-0319:23currentoor@arohner: good point#2016-02-0405:29currentoorI'm seeing very slow queries and my database is not even that large. Any suggestions for how to proceed?#2016-02-0406:42currentoorIf I have an expensive query that returns 5MB of data, I can see that the first time the query is made it takes about ~3 seconds.
But the second time that same query is made, shouldn't it be way faster because of caching?#2016-02-0406:43currentoorI'm wondering if I've set up something incorrectly. #2016-02-0407:18currentoorI thought perhaps the peer does not have enough memory but based on New Relic I can see that I haven’t hit the max heap size yet, so that’s probably not the cause.#2016-02-0408:46currentoorNevermind, turns out to be a different issue.#2016-02-0420:00bkamphaus@currentoor: if you revisit this again and can share the query or an obfuscated form of it, there are common issues like clause ordering, typos in variable bindings, inclusion of clauses that don’t relate and lead to cartesian product intermediate sets of tuples, etc. that result in inefficient queries (and sometimes those inefficiencies may only become glaringly obvious at scale).#2016-02-0420:02bkamphausalso note that index or log segments go into the object cache and won’t by default consume the entire heap, you can change the objectCacheMax system property (defaults to half of heap), more on that here: http://docs.datomic.com/caching.html#object-cache#2016-02-0421:07currentoor@bkamphaus: much obliged!#2016-02-0423:10ljosaDoes Datomic do okay with the transactor and storage on the other side of the country (~100 ms)? Our west coast people are trying to get started with Datomic and are reporting slowness.
From a newly started peer JVM, a query that takes 5 s within the same AWS region as the servers takes 99 s from laptops in our Portland, Oregon, office (~100 ms ping times). And d/connect takes 80 s.
It's much better for subsequent queries, as the peer starts to cache most of what it needs. But is this expected behavior, and is it network latency that is the determining factor? Or is something wrong? Should I be looking for Couchbase connection problems?#2016-02-0423:11bkamphaus@ljosa: it sounds like network latency is certainly a contributing factor and that’s not a configuration I would typically recommend. Is there also a cross-regional consistency setting (i.e. replication or something) that’s a confounding factor as well?#2016-02-0423:13ljosano, no couchbase xdcr, as it doesn't guarantee the consistency that Datomic requires. just a transactor and a couchbase cluster, both in us-east-1.#2016-02-0423:14bkamphaus@ljosa: what are your memory-index settings?#2016-02-0423:15ljosaon the transactor?#2016-02-0423:15lockdownyep, I would try couchbase direct queries first#2016-02-0423:15lockdownto make sure you can discard it#2016-02-0423:16ljosamemory-index-max=512m
memory-index-threshold=32m
object-cache-max=128m
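For context, these keys live in the transactor's properties file. A sketch of where they sit, with comments on what each does; the protocol/host/port values here are illustrative placeholders, not the couchbase setup from this thread:

```properties
# transactor.properties sketch; connection keys below are illustrative
protocol=dev
host=localhost
port=4334
# cap on the in-memory index before writes are throttled
memory-index-max=512m
# a background indexing job kicks off once the memory index passes this size
memory-index-threshold=32m
# transactor-side object cache for index/log segments
object-cache-max=128m
```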
#2016-02-0423:16bkamphausok, that looks reasonable.#2016-02-0423:18bkamphausreason I ask re these two things is (1) really common issue with sudden latency spikes of users on e.g. Cassandra is cross-datacenter consistency/replication, have seen two orders of magnitude jump in latency out of that (2) peers have to accommodate memory index (and read all log/memory index segments into memory) with the initial call to connect, so that could be a contributing factor where even a small amount of latency could have a big impact.#2016-02-0423:19ljosais the peer able to pipeline its couchbase reads, or is there a lot of read-wait-read?#2016-02-0423:24ljosaI did some couchbase testing from my house in Massachusetts. Ping times around 25 ms. Connecting takes a few seconds. The query that they used in Oregon takes 30 s. Also tested directly with Couchbase, and things look reasonable: 200 ms to create cluster, 930 ms to open bucket, 30 ms to read a small document. No errors from Datomic or Couchbase.#2016-02-0423:29bkamphaus@ljosa: it’s certainly true that (especially with the cross-country latency contributing) a warm query will be significantly faster as it won’t be retrieving segments from storage. 
If the entire database or most frequently accessed segments can be held in the object cache on the peer, performance should be fine after the warm up period.#2016-02-0423:29bkamphausdo you have peer logging enabled?#2016-02-0423:31bkamphausthe concurrency of peer reads can be adjusted, also: http://docs.datomic.com/system-properties.html#peer-properties#2016-02-0423:31ljosayes, after my 30 s query I get the first metrics: [Datomic Metrics Reporter] INFO datomic.process-monitor - {:tid 22, :AvailableMB 2590.0, :StorageGetMsec {:lo 26, :hi 389, :sum 33313, :count 857}, :pid 37440, :event :metrics, :ObjectCache {:lo 0, :hi 1, :sum 75, :count 944}, :LogIngestMsec {:lo 0, :hi 601, :sum 601, :count 2}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :DbAddFulltextMsec {:lo 0, :hi 29, :sum 29, :count 2}, :PodGetMsec {:lo 54, :hi 76, :sum 186, :count 3}, :LogIngestBytes {:lo 0, :hi 3581246, :sum 3581246, :count 2}, :StorageGetBytes {:lo 67, :hi 48478, :sum 10179767, :count 857}}#2016-02-0423:33bkamphaushm, the average StorageGetMsec time for the peer doesn’t seem notably slow from the Datomic peer view, (39 msec average)#2016-02-0423:34ljosaI'm going to try to increase concurrency and see if it changes.#2016-02-0423:36bkamphausthe same query is an order of magnitude increase? I would only expect that from latency if e.g. the StorageGetMsec time is extremely fast (i.e. an order of magnitude lower if we’re talking 3 vs. 30 sec), though this assumes storage reads dominate.#2016-02-0423:37bkamphauscold and hot query comparisons, system configs identical re: heap and object-cache size? (i.e not cross a memory threshold for intermediate representation on differently configured systems?)#2016-02-0423:37ljosa-Ddatomic.readConcurrency=10 didn't change anything.#2016-02-0423:38ljosasame query, in lein repl on identical laptops. No -Xmx#2016-02-0423:40ljosaThe query takes 5.3 s from an AWS instance in the east. 
Metrics: {:tid 19, :PeerAcceptNewMsec {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 1200.0, :StorageGetMsec {:lo 0, :hi 5, :sum 444, :count 846}, :pid 12134, :event :metrics, :ObjectCache {:lo 0, :hi 1, :sum 81, :count 936}, :LogIngestMsec {:lo 1, :hi 619, :sum 620, :count 2}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :PeerFulltextBatch {:lo 1, :hi 1, :sum 1, :count 1}, :DbAddFulltextMsec {:lo 0, :hi 35, :sum 35, :count 2}, :PodGetMsec {:lo 12, :hi 31, :sum 71, :count 3}, :LogIngestBytes {:lo 0, :hi 5165426, :sum 5165426, :count 2}, :StorageGetBytes {:lo 67, :hi 48478, :sum 10071059, :count 846}}#2016-02-0423:43bkamphauswow, StorageGetMsec average is 0.52 msec, vs. 39 msec in the other example, so I’d say that could certainly account for the difference (very good fit actually to 5.3 second versus 30 second ratio).#2016-02-0423:46ljosaI tried -Ddatomic.readConcurrency=1000 also, without much effect. (Well, it went from 30.8 s to 28.8 s, not sure if I just got lucky.)#2016-02-0423:47bkamphausmay just be luck, I think the latency is the bottleneck. The storage retrieval component of the query just being masked by the extremely fast storage access in the primary config.#2016-02-0423:47ljosaDo you have other tricks that may speed up the connect and first query? Or do our people in Oregon just have to get used to long startup times? (This is for dev work and ad-hoc analysis; we don't have Datomic peers on production servers in the west.)#2016-02-0423:50bkamphausthe usual answer for reducing latency in populating the object cache is memcached ( http://docs.datomic.com/caching.html#memcached ) but not sure you’ll want to configure it for the dev work and ad-hoc analysis situation you describe. I’m not sure where the costs with the queries are being made perf wise.
You could throw up a REST server to return query results for ad hoc analysis and submit queries to the endpoints, that way the peer stays warm, though I’m not sure that would save you much trouble if you’re getting really large result sets.#2016-02-0423:52ljosadoes such a memcached have to be reachable by both the transactor and the peer?#2016-02-0423:53bkamphaussome of the costly queries may be able to be tuned via clause re-ordering, or strategies for handling time/tx provenance if those are a component?#2016-02-0423:54bkamphausdifferent Datomic processes can use a different memcached#2016-02-0423:56ljosaso a developer could have a memcached on his laptop without the transactor needing to be configured with memcached as well?#2016-02-0423:58ljosathe query itself is just a simple join and pulling four attributes on ~1285 joined pairs of entities: (count (d/q '[:find
(pull ?c [:c/d :c/i])
(pull ?b [:b/n :b/x])
:where
[?c :c/e true]
[?b :b/c ?c]] db))
=> 1285#2016-02-0500:00bkamphaus:b/c is card one or many?#2016-02-0500:00ljosaone#2016-02-0500:02ljosa:c/d and :c/i contain short strings; :b/n and :b/x are floats.#2016-02-0500:13ljosawhoa! the memcached solution reduced the cold query time from my house (25 ms ping time) from ~30 s to 2.2 s. I think we have our solution!#2016-02-0500:14bkamphauscool, good to hear. I wonder if there’s a cost in the structure of that pull that’s non-obvious. I’m doing testing against a larger mbrainz than the sample we provide, I see several orders of magnitude bump in perf to put in the second pull statement, I’ll discuss that with the dev team, though, too.#2016-02-0500:16bkamphausactually never mind, that time is only introduced when I have a typo in one of the pulled attributes, interesting.#2016-02-0500:16ljosathanks, we'll keep that in mind and see if we notice differences with two-pull queries.#2016-02-0500:16ljosaah simple_smile#2016-02-0500:16bkamphaussorry thinking aloud simple_smile#2016-02-0500:16ljosathank you for your help!#2016-02-0500:21bkamphausyeah, I’m not sure, I see < 150 msec w/local postgres storage for this query (larger mbrainz than public) with 10,340 count:
(time
(count
(d/q '[:find (pull ?t [:track/name :track/release]) (pull ?a [:artist/sortName :artist/startYear])
:where
[?a :artist/name "Pink Floyd"]
[?t :track/artists ?a]]
(d/db conn))))
#2016-02-0500:21bkamphausanyways, glad the memcached option seems to be helping! simple_smile#2016-02-0500:24bkamphaus~500 msec with reverse ref in first pull instead of typo 😛 (again 10,340 total results)
(time
(count
(d/q '[:find (pull ?t [:track/name :medium/_tracks]) (pull ?a [:artist/sortName :artist/startYear])
:where
[?a :artist/name "Pink Floyd"]
[?t :track/artists ?a]]
(d/db conn))))
#2016-02-0510:22nha@sonnyto: you could maybe have a look at https://github.com/cddr/integrity#integritydatomic#2016-02-0518:42currentoorBased on this stack overflow post I understand how I can get updated-at values using the history db.
https://stackoverflow.com/questions/24645758/has-entities-in-datomic-metadata-like-creation-and-update-time
But for performance I wanted to retrieve these timestamps together as part of another query. So is that possible? And is this the correct way to do it?
(d/q '[:find (pull ?a structure) ?created-at (max ?updated-at)
:in $ structure
:where
[?a :action/status "foo"]
[?a :action/id _ ?id-tx]
[?id-tx :db/txInstant ?created-at]
[?a _ _ ?all-tx]
[?all-tx :db/txInstant ?updated-at]
]
(d/db conn)
ent/ActionStructure)
#2016-02-0518:43currentoorAssuming :action/id is a unique attribute that is only set when the entity is created.#2016-02-0518:51stuartsierra@currentoor: "for performance I wanted to retrieve these timestamps together as part of another query"
There is usually no need to combine queries for performance reasons.#2016-02-0518:52stuartsierraSmaller, simpler queries usually perform better than large, complex queries.#2016-02-0518:53currentoorYeah I can totally see where you're coming from @stuartsierra but for this specific use-case I'm fetching about 1000 entities from the DB then mapping over them to get their created-at updated-at timestamps. The timestamp loop makes up about half my total execution time.#2016-02-0518:54currentoorIndividually these created-at updated-at queries are negligible but in aggregate they take a significant amount of time.#2016-02-0518:55currentoorDo you think they would still take just as long if I put them inside the larger query?#2016-02-0518:56stuartsierra@currentoor: As with any performance question, measure first. But I would not expect the combined queries to perform any better than separate queries.#2016-02-0518:58stuartsierraI would look at the size of the ?updated-at query results. If you have many transactions updating each entity, that could account for some of the cost of the query.#2016-02-0519:29currentoorHmm. So I know this is hearsay but I'm getting pressured to store created-at updated-at attributes directly on the entity, just like other DBs. I know this is re-inventing stuff but what about performance, do you suspect this would be faster than using Datomic's built in time facilities?#2016-02-0520:42stuartsierra@currentoor: As always, test and measure. Make sure you have realistic-sized data to test.#2016-02-0521:25currentoorWill do, thanks.#2016-02-0522:36currentoorI'm having trouble getting a set of tx-times with this query.
(defn timestamps [db lookup-refs]
(d/q '[:find (min ?tx-time) (max ?tx-time)
:in $ [?eid ...]
:where
[?eid _ _ ?tx _]
[?tx :db/txInstant ?tx-time]]
(d/history db)
lookup-refs))
I'm passing in four lookup-refs so I would expect the result to be four tuples, one for each of the lookup-refs. But instead I get this.
[[#inst "2016-02-05T22:22:31.085-00:00" #inst "2016-02-05T22:31:29.292-00:00"]]
#2016-02-0522:38currentoorCan a query be used to take in a collection and return a collection in the same ordering?#2016-02-0522:38currentoorOh I get it, uniqueness is the issue. This works.#2016-02-0522:39currentoor(defn timestamps [db lookup-refs]
(d/q '[:find ?id (min ?tx-time) (max ?tx-time)
:in $ [?eid ...]
:where
[?eid _ _ ?tx _]
[?eid :action/id ?id ?tx _]
[?tx :db/txInstant ?tx-time]]
(d/history db)
lookup-refs))
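The per-entity grouping above works because ?id appears in :find, so the aggregates are grouped by it. In line with the earlier advice that smaller, separate queries are fine in Datomic, the same timestamps can also be fetched with two small helpers per entity. A sketch only, using the :action/id attribute from this thread and assuming datomic.api is required as d and that :action/id is asserted exactly once, at creation:

```clojure
;; created-at: txInstant of the transaction that asserted :action/id.
(defn created-at [db e]
  (ffirst
    (d/q '[:find (min ?t)
           :in $ ?e
           :where
           [?e :action/id _ ?tx]
           [?tx :db/txInstant ?t]]
         (d/history db) e)))

;; updated-at: most recent txInstant touching any attribute of e.
;; Runs against the history db so every past transaction is considered.
(defn updated-at [db e]
  (ffirst
    (d/q '[:find (max ?t)
           :in $ ?e
           :where
           [?e _ _ ?tx]
           [?tx :db/txInstant ?t]]
         (d/history db) e)))
```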
#2016-02-0606:50robert-stuttaford@currentoor: there’s also the :with clause in Datalog query#2016-02-0606:51robert-stuttafordbtw, the first datalog pattern in your timestamps query [?eid _ _ ?tx _] is made redundant by the second#2016-02-0608:39robert-stuttaford@bkamphaus: what is the maximum size a Datomic database can reach? i vaguely remember Stu either talking about or writing about this somewhere but i can’t find it. i know 1 billion datoms is possible. what’s the total ‘address space’?#2016-02-0611:44tcrayford@robert-stuttaford: ~10 billion datoms is the problem point. Not an address space thing, but problematic#2016-02-0611:44tcrayford@robert-stuttaford: also note that you can have at most ~20k idents in the db, because every ident is in memory in every peer/transactor#2016-02-0612:42robert-stuttafordthanks @tcrayford ! what makes 10b datoms a problem? can you direct me to something to read or watch?#2016-02-0614:39bkamphaus@robert-stuttaford: Stu's answer on this thread elaborates a little more: https://groups.google.com/forum/m/#!topic/datomic/iZHvQfamirI -- it's a practical limit and the value is a rough rule of thumb. the database still functions, but probably not with acceptable performance characteristics especially if the transaction volume would reach that size limit quickly for any given use case.#2016-02-0616:19robert-stuttafordthanks ben!#2016-02-0616:19robert-stuttafordsuper valuable info#2016-02-0616:55meowWhat is an ident of which there can be at most 20k? 
I'd like to understand this limit.#2016-02-0617:23bkamphausthe in-memory aspect of idents is documented here: http://docs.datomic.com/identity.html#idents#2016-02-0617:32meow@bkamphaus: Thank you for that link.#2016-02-0617:36meowSo is it fair to say that the ident limitation is primarily felt with more complex schemas?#2016-02-0617:37meowIf so, what is the impact of schema evolution?#2016-02-0617:40bkamphaus@meow: I’m not familiar with anyone running up against practical limits with ident count, though I imagine it would have an impact if you had e.g. generated or flexible tagging that users provided (if you anticipated thousands and thousands of that sort of tag, I would say switch to a unique/identity keyword or string attribute of your own).#2016-02-0617:40bkamphausthere’s also a limit on schema elements but it’s pretty high, 2^20 http://docs.datomic.com/schema.html#schema-limits#2016-02-0617:42meowBraid has open-ended tagging of conversations.#2016-02-0617:43meowWe will hit those limits.#2016-02-0617:43meowIs there a performance penalty to the unique/identity keyword or string attribute of our own?#2016-02-0617:44meowAnd can you address the impact of schema evolution?#2016-02-0617:48bkamphausident is more performant but carries more memory overhead (pre-loaded). With your own unique attr on ref’d entity vs. ident you pay cost for retrieving segments and require warm cache etc. (three rough orders of magnitude to get segment from storage, memcached, object cache).#2016-02-0617:49meowThat is unfortunate.#2016-02-0617:50bkamphausif by schema evolution you mean how to make the change, you can find every one of those enums and give it an identical attr/val keyword name for what the ident was, leave the entity intact.#2016-02-0617:51bkamphausbut obviously pull, query, etc.
and automagic around ident/eid translation is lost and requires more verbose lookup ref.#2016-02-0617:52meowBy schema evolution I mean the addition and/or removal of entity attributes over time as the database design changes in a production environment along with the issues of migration of existing entities and how that works in datomic given that it is immutable.#2016-02-0617:53bkamphausI want to double check on that 20k limit, not sure if calculated or from a rule of thumb Stu or someone provided i.e. on a video. I do know that we caution people against too many idents but I’m not familiar with that specific boundary, @tcrayford if you don’t mind my quick follow-up question, can you refer me to the source for the 20k ident limit?#2016-02-0617:53bkamphaus@meow: not immutable over time, i.e. you can retract idents, assert them on other attributes, etc. But for testing, staging, etc. a lot of times you’re using the database itself as a test then migrating the portion of the schema/data you prefer to keep.#2016-02-0617:54meowWe always migrate the production instance of Braid.#2016-02-0617:55meowWe have the full history.#2016-02-0617:55bkamphausthe “present” database t/snapshot is the efficient one I mean, as in: http://docs.datomic.com/filters.html#usage-considerations#2016-02-0617:55bkamphaus“queries about "now" are as efficient as possible–they do not consider history and pay no penalty for history, no matter how much history is stored in the system."#2016-02-0617:58meowWhat schema is used when I query for something that happened yesterday?
Is it yesterday's schema or today's schema, assuming the schema was changed?#2016-02-0618:01meowBraid is an online group chat application with groups and tags, and no limits on either.#2016-02-0618:02meowAnd the schema is evolving daily.#2016-02-0618:02meowAnd we have a production instance running since day 1.#2016-02-0618:03meowI use it every day.#2016-02-0618:03bkamphaus@meow answers to many of your questions are covered here: http://docs.datomic.com/schema.html#Schema-Alteration — however, an ident is not a schema element intrinsically (i.e. your own enums not in :db.part/db and an entity having an ident now or in the past doesn’t introduce the kind of complications you get from e.g. relaxing then trying to re-assert a unique constraint#2016-02-0618:03meowI understand that aspect.#2016-02-0618:04meow"Thus traveling back in time does not take the working schema back in time, as the infrastructure to support it may no longer exist. Many alterations are backwards compatible - any nuances are detailed separately below."#2016-02-0618:05meowThat was the answer I was looking for.#2016-02-0618:06meowI wrote Schevo in Python. Schevo was for "schema evolution". It was similar to datomic but OO.#2016-02-0618:07bkamphausI have to step away for a while, I’ll check in on the 20k limit re: idents Monday AM with the dev team. I’ll let you know how precise that limit is or if there are tradeoffs you can make (i.e. if you can keep running it up if it’s an important enough aspect of the architecture and you can accommodate via schema provisioning, cache settings, etc.).#2016-02-0618:07meowThank you for all your help.#2016-02-0618:08bkamphauss/schema provisioning/machine provisioning#2016-02-0618:09meowWe could also take a federated approach to scaling.#2016-02-0618:10meow@jamesnvc: @rafd @crocket See above for details on datomic limitations. 
^#2016-02-0618:13jamesnvcIf I understand correctly, the ident limit is with regards to :db/ident things?#2016-02-0618:14jamesnvctags in braid are just strings that we do look-up on, so the schema shouldn’t actually be growing#2016-02-0618:15jamesnvc(this would be relevant for another project @rafd and I have worked on though)#2016-02-0618:16bkamphaus@jamesnvc: yes this is only about the count of entities that have :db/ident and the impact on memory, I’m trying to source the practical limit that was quoted here as I’m not familiar with it, but the softer principle of limiting the total number of things with idents because you always pay their memory overhead should be a modeling consideration.#2016-02-0618:16jamesnvcyeah, that makes sense#2016-02-0618:37currentoor@robert-stuttaford: thanks!#2016-02-0619:04stuartsierraI would apply the same guideline for Datomic Idents that I use for Keywords in Clojure applications: do not use Keywords for anything user-generated.#2016-02-0623:48tcrayford@bkamphaus: pretty sure I was wrong and the limit is just 2^20#2016-02-0701:41bkamphausAh, ok.#2016-02-0702:20crocketDatomic free vs datomic pro#2016-02-0702:20crocket@meow: I was referring to the limitations of datomic free.#2016-02-0703:55meow@crocket This was not in response to your question. This was my own question about a different limitation.#2016-02-0721:46kschraderis this the correct way to remove an index:#2016-02-0723:53bkamphaus@kschrader: should be an ok approach except for the case where the attribute is unique (i.e. being unique is sufficient to keep the index, you’d also have to additionally drop the uniqueness constraint to drop the index)., i.e. (example from docs - last section: http://docs.datomic.com/schema.html#schema-alteration )
[[:db/retract :person/external-id :db/unique :db.unique/identity]
 [:db/retract :person/external-id :db/index true]
 [:db/add :db.part/db :db.alter/attribute :person/external-id]]
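For contrast with the unique case above: dropping the index on a plain indexed (but non-unique) attribute only needs the :db/index retraction plus the :db.alter/attribute marker. A hedged sketch; :person/nickname is a hypothetical attribute, not one from the discussion.

```clojure
;; Hypothetical tx-data: drop the index on a non-unique, indexed attribute.
;; :person/nickname is illustrative only.
(def drop-index-tx
  [[:db/retract :person/nickname :db/index true]
   [:db/add :db.part/db :db.alter/attribute :person/nickname]])
```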
#2016-02-0800:35kschradergot it thanks#2016-02-0800:35kschrader@bkamphaus: is there any way to know how much memory an index will take up?#2016-02-0810:25pesterhazyI'm seeing Transaction error clojure.lang.ExceptionInfo: :db.error/transactor-unavailable Transactor not available {:db/error :db.error/transactor-unavailable} pretty regularly#2016-02-0810:26pesterhazyit always recovers but this is a bit worrying. (This is using AWS, official AMIs with dynamo)#2016-02-0810:26pesterhazycould this be related to GC pauses in the peer?#2016-02-0811:48dm3yes, that could trigger it#2016-02-0811:48dm3same way as a broken network#2016-02-0813:36pesterhazythe GC pauses we see are only 5 seconds, though -- would that be sufficient?#2016-02-0813:37pesterhazynot that 5 second GC pauses aren't indicative of a problem in our code simple_smile#2016-02-0813:56dm3is there a timeout parameter of some sort?#2016-02-0814:00bkamphaus@pesterhazy also large transactions on the peer or indexing not keeping up on transactor. gc pause of just 5 seconds could possibly impact it if timed poorly or in quick succession.#2016-02-0814:02pesterhazythis peer is processing hardly any transactions#2016-02-0814:02bkamphausTimeout tolerance can be set by upping transactor heartbeat (Datomic level), or on peer changing datomic.peerConnectionTTLMsec to be higher (HornetQ level)#2016-02-0814:11bkamphaus@pesterhazy: if the peer isn’t processing many transactions, and the transactor (verified from metrics or logs) is heartbeating fine and not reporting alarms, peer GC is the most likely culprit. If you’re using default JVM GC settings on the peer app, you could adopt settings similar to those on the transactor if the goal is to avoid pauses.
Or tolerate the GC by upping one or both of the settings mentioned above.#2016-02-0814:12pesterhazylike stu halloway says in his debugging talk, the culprit is always the GC#2016-02-0814:13pesterhazylooking at the transactor metrics, the heartbeating looks fine#2016-02-0814:13pesterhazyI guess there's no way around finding where those GC pauses are coming from#2016-02-0814:13pesterhazythanks for your help!#2016-02-0907:36onetomim trying to find a most minimal example of fast in-memory datomic db tests which doesn't require a global connection object and uses d/with.
i found http://yellerapp.com/posts/2014-05-07-testing-datomic.html but it doesn't show what that (empty-db) function does to avoid recreating the db and reconnecting to it.#2016-02-0907:36onetomi found https://gist.github.com/vvvvalvalval/9330ac436a8cc1424da1 too but it seems a bit harsh and doesn't show what solution it is comparing to.#2016-02-0907:41onetomah, i see vvvvalvalval has a recent article on this topic https://vvvvalvalval.github.io/posts/2016-01-03-architecture-datomic-branching-reality.html#2016-02-0908:25pesterhazy@onetom, interesting article!#2016-02-0908:36pesterhazydo I read the article correctly: you can use d/with to do multiple "transactions" one after another where the second one builds on the first?#2016-02-0908:51pesterhazyso you can basically completely emulate, or "fork", a connection#2016-02-0909:00onetomyup, that's the idea#2016-02-0910:15robert-stuttafordyou totally can#2016-02-0910:16robert-stuttafordwe do this with great success#2016-02-0910:16robert-stuttafordyou do need to shepherd the temp ids from prior d/with’s to later ones if you mean to ultimately transact something for real#2016-02-0910:20robert-stuttafordmake tx -> d/with. query with db, make another tx (now using ids that look like ones in storage but actually just came from d/with) -> d/with. repeat N times. actually commit final tx to storage which includes all the intermediate txes together, after swapping out the d/with “real” ids for the tempids again, so that the final tx has all the right real and temp ids.#2016-02-0910:20robert-stuttafordi have code for this if anyone wants#2016-02-0910:25robert-stuttafordwe use it here: http://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/#2016-02-0910:25pesterhazyinteresting#2016-02-0910:26robert-stuttafordmultiple onyx tasks each doing their own work, but each building on the data of the previous one.
only actually goes into storage at the end.#2016-02-0910:26pesterhazyin my use case, I'm not planning to actually "really" commit anything#2016-02-0910:26robert-stuttafordthey each use d/with and return tx data#2016-02-0910:26pesterhazycurious, why would you want to commit at the end?#2016-02-0910:27robert-stuttafordusing the onyx-datomic commit-bulk-tx plugin#2016-02-0910:27robert-stuttafordyou’ll see if you scan my post#2016-02-0913:26pesterhazy@robert-stuttaford: will do, thanks#2016-02-0914:50bkamphausDatomic 0.9.5350 is now available https://groups.google.com/d/msg/datomic/TIGnE3Dtjgs/PEAWEQdcEgAJ#2016-02-0915:11jgdavey@bkamphaus: Can you elaborate on this bullet:
* Improvement: connection caching behavior has been changed so that peers can
now connect to the same database served by two (or more) different
transactors.#2016-02-0915:12jgdaveyMore than one transactor can serve a single datomic database?#2016-02-0915:22marshall@jgdavey: That bullet specifically deals with peers connecting to multiple databases that originated from the same call to create-database. I.e. if you have a staging database that is restored locally (dev) from a backup of a production database on some other storage, you can now launch a single JVM peer that can connect to both the staging and the production instance.#2016-02-0915:51jgdaveyJust to make sure I’m understanding correctly: is the connection caching now based on URI and database id?#2016-02-0915:58bkamphaus@jgdavey: aspects of the connection+storage config, but caching in that respect is just an implementation detail. The contract-level from this release forward is that two different transactors, one serving a database and the other a restored copy of that database in a different storage, can be reached from the same peer.#2016-02-0916:00jgdaveyThat makes sense. Whereas before, peers wouldn’t be able to simultaneously connect to a db and a restored copy of it on another transactor.#2016-02-0916:00jgdaveyAnd/or the behavior was undefined/unsupported#2016-02-0916:00jgdaveyNot trying to beat a dead horse, just want to make sure I understand simple_smile#2016-02-0916:42kschrader@jgdavey: before if you tried to establish a second connection it would stay connected to the first DB#2016-02-0916:42kschradersilently#2016-02-0916:42kschraderassuming that I’m understanding this change correctly, this fixes that#2016-02-0916:45kschraderif you did (def prod-conn (d/connect PROD_URI))#2016-02-0916:45kschraderand then (def local-copy-of-prod (d/connect LOCAL_COPY_URI))#2016-02-0916:45kschraderin a REPL#2016-02-0916:46kschraderlocal-copy-of-prod would actually be pointing at PROD_URI#2016-02-0916:46kschraderwhich was bad#2016-02-0919:21pesterhazyyeah I'm happy that's getting fixed#2016-02-0919:34jgdaveyThank you everyone for the clarification.
simple_smile#2016-02-1007:01timothypratleyAre there any command line tools for importing TSV files into Datomic? (Assuming an existing schema, just want to transact in new facts, ideally with a low startup time cost)#2016-02-1007:44val_waeselynck@onetom happy to share more details about how we do testing by forking connections if the blog post is not enough simple_smile#2016-02-1007:44val_waeselynckI may release a sample application or Leiningen template at some point#2016-02-1007:45onetom@val_waeselynck: that would be really great!#2016-02-1007:45onetomi tried your mock connection and it works so far#2016-02-1007:46onetomi was using this function to create an in-memory db with schema to serve as a starting point for forking in tests:
(defn new-conn
  ([] (new-conn db-uri schema))
  ([uri schema]
   (d/delete-database uri)
   (d/create-database uri)
   (let [conn (d/connect uri)]
     @(d/transact conn schema)
     conn)))
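robert-stuttaford’s tempid-shepherding step described earlier (swap the entity ids a speculative d/with handed out back into tempids before the final real transact) can be sketched as a pure transform, assuming list-form tx-data of [op e a v] and a map from d/with-resolved ids to their original tempids; the names here are illustrative, not his actual code:

```clojure
(defn restore-tempids
  "Rewrite tx-data so ids that were resolved by a speculative d/with
   become their original tempids again; the final real transact will
   then re-resolve them. `resolved->tempid` maps d/with-assigned entity
   ids back to the tempids they came from."
  [resolved->tempid tx-data]
  (mapv (fn [[op e a v]]
          [op (get resolved->tempid e e) a (get resolved->tempid v v)])
        tx-data))

;; Example: d/with assigned 17592186045430 to tempid -1.
(restore-tempids {17592186045430 -1}
                 [[:db/add 17592186045430 :user/name "ada"]
                  [:db/add 42 :user/friend 17592186045430]])
;; => [[:db/add -1 :user/name "ada"] [:db/add 42 :user/friend -1]]
```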
#2016-02-1007:47onetomi guess your empty-db fn is doing something similar#2016-02-1007:49onetomhave you released this mock connection as a lib anywhere yet?
if it served you well so far, it would make sense to create a lib, no?#2016-02-1007:50onetomactually i would expect cognitect to supply such a solution out of the box if it is a really sound approach as @robert-stuttaford hinted above#2016-02-1007:50val_waeselynck@onetom: yes I'll probably roll out a lib soon, just wanted to get some criticism first#2016-02-1007:51onetomok, here is my criticism: why is it not on clojars yet!? ;D#2016-02-1007:51val_waeselynckMy next blog post will be a guided tour of our architecture, so it'll probably cover this in more detail#2016-02-1007:51onetomhappy to hear!#2016-02-1007:51val_waeselynckAnd I wouldn't be surprised if this was actually the implementation of Datomic in-memory connections simple_smile#2016-02-1007:52onetomyet it takes longer to just create/delete in-mem dbs#2016-02-1007:52onetomdo u think it's just the overhead of specifically transacting the schema?#2016-02-1007:53val_waeselynckperformance is not the biggest win IMO, being able to fork from anything is#2016-02-1007:53val_waeselynckincluding your production database, I do it all the time#2016-02-1007:53onetomthat's what i was missing from your article. you haven't established a baseline which you are comparing your solution to, so im not sure what the alternative approach would be and how much faster it is to use the mock connection#2016-02-1007:55onetomthat sounds a bit risky to work w the production fork, no?
i always work on restored backups, but our db takes only a few seconds to restore still, so that's why it's viable atm#2016-02-1007:55pesterhazy@val_waeselynck: your article is inspirational, will def try that for us as well#2016-02-1007:55val_waeselynckwhy risky? once you fork, it's basically impossible to write to the production connection#2016-02-1007:56val_waeselynck(well, granted, the risk is that you forget to fork :p)#2016-02-1007:57onetomthat's what i meant simple_smile#2016-02-1007:57val_waeselynck@pesterhazy: thank you simple_smile this encourages me to roll out a lib then#2016-02-1007:57pesterhazythat would be very useful I think#2016-02-1007:58pesterhazyjust the mock connection itself would be great as a lib#2016-02-1007:58val_waeselynck@onetom: anyway, I'm generally not too worried about accidental writes with Datomic, they're pretty easy to undo#2016-02-1007:59onetom@val_waeselynck: your test example is the most heartwarming thing i've seen in a long time
that's how i always hoped to describe integration tests and now you made it a reality by putting the dot on the I (where I = datomic simple_smile)#2016-02-1008:01pesterhazynow if someone could build a better deployment strategy for datomic on AWS with live logging, that'd be great too (I just had the prod transactor fail to come up twice, without a way to find out what the problem was; only to work the third time, for no apparent reason)#2016-02-1008:01onetom@val_waeselynck: are you using any datomic wrapper framework, like http://docs.caudate.me/adi/ or something similar?#2016-02-1008:02val_waeselynck@onetom: no, never heard of such a framework 😕#2016-02-1008:02val_waeselynckquite happy with datomic's api (except for schema definitions)#2016-02-1008:03onetomwell, that's one of the obvious areas where some framework could help#2016-02-1008:04onetombut then migrations become tricky if u have a source file representing your schema, since the DB itself is not the single place of truth anymore#2016-02-1008:05onetombut i read your article about conformity, so i will try that approach soon#2016-02-1008:10val_waeselynck@onetom @pesterhazy I gotta run but happy to discuss this further, actually it would be really nice if you could persist your main questions and criticisms as comments on the blog post, so others can benefit from it :)#2016-02-1008:39robert-stuttaford@pesterhazy: that logs rotate from the transactor rather than stream is problematic for me too. it’s made logs totally useless for every instance that our transactors failed in some way#2016-02-1008:41caspercSo is it just me or does the Datomic client just never return when submitting a malformed query?#2016-02-1008:42caspercLike this one:
(d/q '[:find (pull ?be [*])
       :where $ ?id
       :where
       [?be :building/building-id ?id]]
     (d/db @conn)
2370256)#2016-02-1008:43casperc(with two :where clauses)#2016-02-1008:48casperccurrently the process is using a lot of CPU, so apparently it is doing something#2016-02-1008:58onetom@casperc: this doesn't hang for me:
(defn idents [db]
  (q '[:find ?eid ?a
       :where $
       :where
       [?eid :db/ident ?a]] db))
(->> (new-conn) db idents pprint)
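The hang casperc reports stems from the malformed first :where clause; presumably the intent was an :in clause binding the data source and the id input. A sketch of the corrected form, written as data since actually running it needs a live Datomic connection:

```clojure
;; Corrected form of the query above: `:where $ ?id` should be `:in $ ?id`.
(def building-by-id-query
  '[:find (pull ?be [*])
    :in $ ?id
    :where
    [?be :building/building-id ?id]])

;; Usage (assuming a peer library aliased as d and a connection conn):
;; (d/q building-by-id-query (d/db conn) 2370256)
```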
#2016-02-1008:59onetombut it doesn't have a 2nd param either; let me try that#2016-02-1009:00onetomthat still works and no cpu load#2016-02-1009:01onetomim on [com.datomic/datomic-free "0.9.5344"]#2016-02-1009:02pesterhazy@robert-stuttaford: exactly. you have logs, but only the next day and only in case nothing goes wrong (which is precisely the case where you're not particularly interested in the logs)#2016-02-1009:02pesterhazyit'd be already helpful to be able to specify a logback.xml so you can set up your own logging#2016-02-1009:03robert-stuttafordyep#2016-02-1009:03robert-stuttafordwe use http://papertrailapp.com and it’d be great to use logback’s syslog appender with that#2016-11-1619:08stuartsierra@pesterhazy Yes, that's a known limitation. Reversed attributes aren't supported in transactions. I don't know if/when it will be.#2016-11-1619:09stuartsierra@mbutler Yes, any kind of uniqueness implies indexing values.#2016-11-1619:09Matt ButlerAwesome thanks 🙂#2016-11-1619:27jonpitherHi - do you have any resources on understanding Datomic’s relationship with Dynamo write capacity?#2016-11-1619:27jonpithercurrently getting com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException in the transactor - would want to avoid this#2016-11-1619:28jonpitherI can up the provisioned write capacity - but would like to understand how Datomic does its writes, i.e. presumably it's a transaction per unit?#2016-11-1619:31jonpitheror one "write capacity unit" per datom - think it could be this?#2016-11-1621:06marshall@jonpither Datomic’s use of DDB writes doesn’t correlate exactly to transactions or datoms
For every transaction, Datomic will write durably to the transaction log in DDB, but the transactor also writes a heartbeat to storage and, most importantly, will write large amounts of data during indexing jobs.#2016-11-1621:06marshallBecause of this, you need to provision ddb throughput based on the need during indexing jobs, not ongoing transactional load#2016-11-1621:07jonpitherGreat, thanks @marshall #2016-11-1621:07marshalla bit more info can also be found here http://docs.datomic.com/capacity.html#dynamodb#2016-11-1621:09jonpitherGreat my next Q is answered there about capturing the throttles#2016-11-1713:31PBSo I know everybody has their own version of this… But I have just “finished" my first iteration of my “datomic helpers" library. Any feedback would be greatly appreciated: https://github.com/petergarbers/molecule#2016-11-1715:23Matt ButlerI want to do a case-insensitive search on a string value, from googling I’ve gathered it’s possible but haven’t found any info on how to implement it, any pointers? 🙂#2016-11-1715:26PB@mbutler: [(.equalsIgnoreCase ^String ?db-val ?val)]#2016-11-1715:28Matt Butlerah awesome @petr, I seem to remember something about needing to require a function into the file if it wasn't in clj.core; is that the case here?#2016-11-1715:28PBNot as far as I know#2016-11-1715:28PBOr not anywhere I’ve done it#2016-11-1715:29Matt ButlerOk cool, well thanks again 👍#2016-11-1716:01timgilbertHi all... I've figured out how to use reified transactions such that I'm attaching a :request/person ref to each transaction pointing to the logged-in user, but I'm a little baffled by how I query the data to get the reified data out. Anyone have examples of that?#2016-11-1716:01Alex Miller (Clojure team)@mbutler that’s a function on java.lang.String.
all java.lang classes are imported by default.#2016-11-1716:35marshall@timgilbert This blog post: http://blog.datomic.com/2016/08/log-api-for-memory-databases.html discusses how to use the log API to pull out data about transactions#2016-11-1716:35marshallI’d also suggest the reified transactions video here: http://www.datomic.com/videos.html#2016-11-1716:39timgilbertCool, thanks @marshall, that looks helpful. I did watch that video some time back and thought it was really good, but I've found it difficult to grep through 😉#2016-11-1716:39marshallthere’s your unicorn company idea - grep for videos#2016-11-1718:26jarppe@timgilbert I was just wondering the same. I made this to test just that: https://gist.github.com/jarppe/7a7b3234b6ce0b704df8046c67aad988#2016-11-1718:26jarppehope it helps#2016-11-1719:07timgilbertThanks @jarppe, that is helpful#2016-11-1809:39robert-stuttafordhey @marshall and @jaret, just a quick FYI that @geoffk is on Cognician’s infrastructure team and may have some questions around transactors at some point 🙂#2016-11-1813:59jaret@robert-stuttaford sounds good. He can drag us into a private chat or we can arrange a call. Let us know what works best.#2016-11-1923:08zaneWhat are best practices regarding pagination?#2016-11-1923:08zaneI'm searching the Google Group and there doesn't appear to be much consensus.#2016-11-2112:22tengI have a 2000 lines long script with datoms that I read into the database (slurp + transact). Is there a way to get better error messages, for example telling me which fact or line in the file that contains the error (I know what the problem is but need to scope it down)? Now I just get:
CompilerException java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/tempid-not-an-entity tempid used only as value in transaction#2016-11-2113:02gravI get Critical failure, cannot continue: Heartbeat failed when doing an restore to an empty datomic:dev database. What to do?#2016-11-2113:23val_waeselynck@teng not really a direct answer, but maybe you can transact speculatively (datomic.api/with) only small segments on your file to know where the problem is ?#2016-11-2113:26teng@val_waeselynck I found the error by commenting out parts of the script and ran the script again and again. It works, but a better error message would be preferable.#2016-11-2115:12Matt ButlerWhen retracting an entity what is the best practice for finding out if that entity existed/was retracted. Should you query the :db-before and :db-after or is it ok to interpret it based on the :tx-data (are there datums present that suggest the removal of an entity)?#2016-11-2115:23val_waeselynck@mbutler yes, that's the 5th element of a Datom#2016-11-2115:24val_waeselynckhttp://docs.datomic.com/javadoc/datomic/Datom.html#added--#2016-11-2115:25marshall@grav do you have more details of your failure? what OS, what is the restore command you’re running? Any exceptions or errors in the transactor log?#2016-11-2115:26Matt Butler@val_waeselynck yes, so you could look for a datum in the tx-data that says that some attribute on the entity you want removing (probably the one you did the lookup using) has a 5 element of false#2016-11-2115:27val_waeselynck@mbutler yes#2016-11-2115:28Matt Butlercool cool 🙂#2016-11-2115:28grav@marshall: I'm away from the machine, but I'll get the details tomorrow and post here.#2016-11-2123:04bbloomare offline docs available? 
i’d like to be able to grep locally#2016-11-2123:23bbloomwget -r 2 did the trick nicely#2016-11-2213:52Matt ButlerIf there are 2 datums in the same transaction setting a :db.unique/value attribute to the same value do either go through/are they merged or is the tx thrown out completely returning a db.error/unique-conflict?
When processing a large number of transactions, for about 1/5 I get value: x already held by: 17592186703132 asserted for: 17592186703135 It seems always to be 3 ids apart. Can this be happening within a transaction or is it that there are 2 transactions being created with the same datum value x?#2016-11-2214:09tengI found myself normalizing ten statuses (e.g. APPLICATION_COMPLETED) into its own entity, so that instead of storing e.g. :user/status with the value ACCEPTED, I have the entity 'user-status' with all the valid statuses, and :user/status-id referring to that entity with an id. Is this idiomatic in Datomic (I often model it like this in traditional relational databases, but sometimes I don’t because of the improved readability to store it as a plain value).#2016-11-2214:38tengI changed back to using values. Felt like incidental complexity otherwise.#2016-11-2214:40karol.adamieci have an interesting problem. I want to build a basket with line items in it. I assign the basket to user using identity ref. So subsequent transactions for same user result in only one basket always. So far so good. Problem is now that my Line items collection is actually growing. Is there a way to model cardinality many on a ref in such a way that it is not duplicating the line items but replaces them?
;; Line item
{:db/id #db/id[:db.part/db]
 :db/ident :item/quantity
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one
 :db/doc "Quantity of item"
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :item/part
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/one
 :db/doc "Line item ref to product"
 :db.install/_attribute :db.part/db}
;; Basket
{:db/id #db/id[:db.part/db]
 :db/ident :basket/owner
 :db/valueType :db.type/ref
 :db/unique :db.unique/identity
 :db/cardinality :db.cardinality/one
 :db/doc "the owner of the basket"
 :db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
 :db/ident :basket/items
 :db/valueType :db.type/ref
 :db/isComponent true
 :db/cardinality :db.cardinality/many
 :db/doc "Items in the basket"
 :db.install/_attribute :db.part/db}
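One way to get the "replace, don't accumulate" behavior karol.adamiec asks about above is to compute explicit retractions alongside the new assertions. A pure-Clojure sketch of the idea; the helper name and values are illustrative, and inside a real transaction function the current values would be read from the db argument with d/q:

```clojure
(defn assert-with-retracts
  "Build tx-data that makes entity e's cardinality-many attribute a hold
   exactly the values in vs: retract current values missing from vs, then
   assert everything in vs. `current` stands in for what a transaction
   function would query from the database."
  [e a current vs]
  (vec (concat
        (for [v current :when (not (contains? (set vs) v))]
          [:db/retract e a v])
        (for [v vs]
          [:db/add e a v]))))

(assert-with-retracts 17592186045431 :basket/items #{100 101} [101 102])
;; => [[:db/retract 17592186045431 :basket/items 100]
;;     [:db/add 17592186045431 :basket/items 101]
;;     [:db/add 17592186045431 :basket/items 102]]
```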
#2016-11-2214:42karol.adamiec{
:db/id #db/id[:db.part/user]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email "#2016-11-2214:43karol.adamiecso a transaction above executed multiple times always edits the same basket, but the line items grow instead of being totally replaced ….#2016-11-2214:48marshall@karol.adamiec I believe there was a google group post some time back about implementing a transaction function called assertWithRetractions or something like that#2016-11-2214:48marshallhttps://groups.google.com/forum/#!topic/datomic/_wIgitHKo6A#2016-11-2214:49karol.adamiec@marshall so in general the upsert niceness ends when cardinality many comes into play? is that a correct understanding?#2016-11-2214:49marshallupsert behavior only occurs on cardinality one attributes. the semantic of cardinality many requires that assertions be additive, not replacing#2016-11-2214:50karol.adamiecmakes perfect sense, thanks! will check out the thread.#2016-11-2214:50marshall👍#2016-11-2216:48karol.adamiec(d/transact conn [{:db/id #db/id [:db.part/user]
:db/ident :assertWithRetracts
:db/fn #db/fn {:lang "clojure"
:params [db e a vs]
:code "(vals (into (into {} (map (comp #(vector % [:db/retract e a %]) first) (datomic.api/q [:find '?v :where [e a '?v]] db))) (into {} (map #(vector % [:db/add e a %]) vs))))"}}])
CompilerException java.lang.RuntimeException: Can't embed object in code, maybe print-dup not defined: #2016-11-2216:49karol.adamieci picked the function from one of the threads, but no idea why it is not installing?#2016-11-2216:52karol.adamiechmm, worked through REST api transaction...#2016-11-2217:01karol.adamiecsuccess. i can retract cardinality many with that little function, and assert new stuff in.#2016-11-2217:01karol.adamiecso back to my original query...#2016-11-2217:02karol.adamiec{
:db/id #db/id[:db.part/user]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email “#2016-11-2217:03karol.adamiechow can i glue that together with transaction
[[:assertWithRetracts 17592186045431 :basket/items []]] like that?#2016-11-2217:04karol.adamiectwo separate calls or is there a way to wire the two together?#2016-11-2217:10karol.adamiecah, i think i can just put both into a transact vector and it will run in one transaction?#2016-11-2217:33lellis@karol.adamiec, yes it will!#2016-11-2217:36karol.adamiec@lellis i am struggling with how to connect the two. I need the eid of a basket to match a thing created or retrieved by the first transaction. the lookup ref will not work in the same transaction (per docs) and i am unsure if they nest anyway. temp id maybe?#2016-11-2217:39karol.adamiec[
{
:db/id #db/id[:db.part/user -1]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email “#2016-11-2217:40karol.adamieci would like that transaction data to tie in nicely, regardless of whether the basket just got created or it was there already, but it seems to not work ;/#2016-11-2218:44lellisU want to create and retract data in the same transact right? @karol.adamiec ?#2016-11-2220:46zaneIs there something I can read to better understand query caching and how to optimize queries?#2016-11-2221:34geoffs@zane have you read these in the datomic docs? http://docs.datomic.com/query.html#clause-order#2016-11-2221:34geoffshas some information about both topics#2016-11-2221:35geoffsit's not a ton of info, but it has the basics#2016-11-2222:01karol.adamiec@lellis well i want to create the basket entity or get ahold of it if it exists. that entity has a collection of items that need to be reset, that is why i need the custom dbfn. Default behaviour for cardinality many is adding stuff in. I need to replace the items collection instead.#2016-11-2222:31zane@geoffs: Thanks! Yeah, I'm aware of clause order and reducing the result set upfront.#2016-11-2223:07zaneWhat's the most efficient way to retrieve the most recent transaction id for a given entity?#2016-11-2223:07zaneIf I have the entity id.#2016-11-2223:07zaned/log?#2016-11-2223:07zaned/datoms with :eavt?#2016-11-2223:11zaned/history?#2016-11-2223:17zaned/entity-db?#2016-11-2223:22zaneFeels like definitely not d/history.#2016-11-2307:15robert-stuttaford@zane, if you want the latest transaction for an entity, you’d need to query all its current attributes#2016-11-2307:16robert-stuttaford[:find (max ?tx) :in $ ?e :where [?e _ _ ?tx]] is one approach#2016-11-2307:29grav@marshall Ok, so regarding the Critical failure, cannot continue: Heartbeat failed error:
- OS: Mac OS X
- transactor command: ./bin/transactor -Xmx4g -Xms4g -Ddatomic.peerConnectionTTLMsec=20000 -Ddatomic.txTimeoutMsec=20000 config/samples/dev-transactor-template.properties
- restore command: ./bin/datomic -Xmx4g -Xms4g restore-db file:/Users/mgn/Downloads/import-2016-11-08 datomic:
- transactor log:
45741-2016-11-21 10:51:24.134 INFO default datomic.kv-cluster - {:event :kv-cluster/create-val, :val-key "5821b80f-e2af-4e7d-a02e-8fb9838bfd56", :bufsize 15561, :phase :begin, :pid 36803, :tid 64}
45742:2016-11-21 10:51:24.153 WARN default datomic.backup - {:message "error executing future", :pid 36803, :tid 10}
45743-java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
45744- at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_111]
45745- at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_111]
45746- at datomic.common$pfuture$reify__319.deref(common.clj:587) ~[datomic-transactor-pro-0.9.5407.jar:na]
45747- at clojure.core$deref.invokeStatic(core.clj:2228) ~[clojure-1.8.0.jar:na]
45748- at clojure.core$deref.invoke(core.clj:2214) ~[clojure-1.8.0.jar:na]
45749- at datomic.backup.ValueRestore.restore_node(backup.clj:446) ~[datomic-transactor-pro-0.9.5407.jar:na]
45750- at datomic.backup.ValueRestore.restore_node(backup.clj:437) ~[datomic-transactor-pro-0.9.5407.jar:na]
45751- at datomic.backup$restore_db$fn__9032$fn__9035.invoke(backup.clj:660) ~[datomic-transactor-pro-0.9.5407.jar:na]
45752- at datomic.backup$restore_db$fn__9032.invoke(backup.clj:656) ~[datomic-transactor-pro-0.9.5407.jar:na]
45753- at clojure.core$binding_conveyor_fn$fn__4676.invoke(core.clj:1938) [clojure-1.8.0.jar:na]
45754- at clojure.lang.AFn.call(AFn.java:18) [clojure-1.8.0.jar:na]
45755- at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
45756- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
45757- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
45758- at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
45759-Caused by: java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
45760- at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_111]
45761- at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_111]
45762- at datomic.common$pfuture$reify__319.deref(common.clj:587) ~[datomic-transactor-pro-0.9.5407.jar:na]
45763- at clojure.core$deref.invokeStatic(core.clj:2228) ~[clojure-1.8.0.jar:na]
45764- at clojure.core$deref.invoke(core.clj:2214) ~[clojure-1.8.0.jar:na]
45765- at datomic.backup.ValueRestore$fn__8956.invoke(backup.clj:422) ~[datomic-transactor-pro-0.9.5407.jar:na]
45766- at datomic.backup.ValueRestore.restore_val(backup.clj:419) ~[datomic-transactor-pro-0.9.5407.jar:na]
45767- at datomic.backup.ValueRestore$fn__8966$fn__8967.invoke(backup.clj:444) ~[datomic-transactor-pro-0.9.5407.jar:na]
45768- ... 6 common frames omitted
45769-Caused by: org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): localhost:4335" [90067-171]
45770- at org.h2.message.DbException.getJdbcSQLException(DbException.java:329) ~[h2-1.3.171.jar:1.3.171]
45771- at org.h2.message.DbException.get(DbException.java:158) ~[h2-1.3.171.jar:1.3.171]
45772- at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:399) ~[h2-1.3.171.jar:1.3.171]
#2016-11-2308:12gravOh, I get some exceptions before that, eg:
2016-11-23 09:07:43.231 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 0, :attempts 0, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.231 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 0, :attempts 0, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.283 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 50.0, :attempts 1, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.306 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 50.0, :attempts 1, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.385 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 100.0, :attempts 2, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.408 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 100.0, :attempts 2, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.586 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 200.0, :attempts 3, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:43.613 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 200.0, :attempts 3, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:43.987 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StoragePutBackoffMsec 400.0, :attempts 4, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 64}
2016-11-23 09:07:44.018 INFO default datomic.kv-cluster - {:event :kv-cluster/retry, :StorageGetBackoffMsec 400.0, :attempts 4, :max-retries 9, :cause "org.h2.jdbc.JdbcSQLException", :pid 47395, :tid 87}
2016-11-23 09:07:44.310 INFO default datomic.process-monitor - {:tid 11, :StoragePutMsec {:lo 0, :hi 18500, :sum 134937, :count 1898}, :AvailableMB 3190.0, :StorageGetMsec {:lo 0, :hi 3370, :sum 22493, :count 1917}, :pid 47395, :event :metrics, :StoragePutBytes {:lo 5641, :hi 19880, :sum 29298291, :count 1903}, :MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :StoragePutBackoffMsec {:lo 0, :hi 400, :sum 750, :count 5}, :StorageGetBackoffMsec {:lo 0, :hi 400, :sum 750, :co
#2016-11-2309:23jonpitherHi - Anyone setup logstash with Datomic (am using the AMI at present) - any tips appreciated!#2016-11-2310:22karol.adamiecmorning, i will shamelessly repeat my question in case anyone missed it 🙂.
[
{
:db/id #db/id[:db.part/user -1]
;notice the lookup ref usage to obtain references. Lovely.
:basket/owner [:user/email "
Why is the above not working? is #db/id[:db.part/user -1] working only inside an ‘expression’ and not in a whole transaction? Is there any way to tie that in so i do not have to write logic in code? 🤓#2016-11-2311:48karol.adamiecBTW: what are the transactional semantics of transact? all forms from the vector are part of one transaction i assume?#2016-11-2312:02drankardDoes anyone have an example of how to run gc-storage from the REST API ?#2016-11-2312:12jonpitherfollowing the datomic SQL script to create the Datomic DB and I get ERROR: permission denied for tablespace pg_default in RDS#2016-11-2314:01zane@robert-stuttaford: Yeah, that's our current implementation. It's not particularly performant so we were looking for something faster. One option is to have an explicit attribute for updatedAt, but we're trying to avoid that.#2016-11-2314:38pheuterWe’d like to upgrade the process count on our current Datomic Pro License, how can we do so? Doesn’t seem like there’s a way via http://my.datomic.com#2016-11-2315:56Alex Miller (Clojure team)contact <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> with any questions#2016-11-2409:48karol.adamiechi, have a question about UUID types. Docs recommend 'Squuids are valid UUIDs, but unlike purely random UUIDs, they include both a random component and a time component.’
Q1: Is there a way to ask Peer over REST for a Squuid?
Q2: I can generate v1 (time based) or v4 (RNG based). Is there a preference in case i am unable to use Squuids?#2016-11-2409:52rauh@karol.adamiec The way they're created is no secret. It's a few lines of code. There are implementations in java, javascript/cljs and lua and possibly more.#2016-11-2409:54karol.adamiecabsolutely right. my google fu got weak in the morning 😄#2016-11-2410:10karol.adamiecso basically it is replacing the first part of uuid v4 with a timestamp. :+1:#2016-11-2410:16rauh@karol.adamiec I did lua for openresty: https://gist.github.com/rauhs/b93bcf0d676f0335fd483d7c7c77303d#2016-11-2410:16seantempestaHow could I get the minimum diff (in datoms) between two databases? (same database, different points in time)#2016-11-2410:32seantempestaah, never mind. I found this article. https://blog.jayway.com/2012/06/27/finding-out-who-changed-what-with-datomic/#2016-11-2410:55karol.adamiecJavascript ES6 impl of Squuids, if anyone fancies 🙂 Thanks for hints @rauh
const [, , , , , , , , ...rest] = uuid();
Math.round(Date.now() / 1000).toString(16) + rest.join('');
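On the JVM side the peer library already provides this as `d/squuid`; for contexts without the peer, here is a hedged Clojure sketch of the same construction rauh describes (the function name and bit layout follow the commonly cited implementation: the high 32 bits of a random UUID are replaced with Unix seconds):

```clojure
;; Sketch: hand-rolled squuid, assuming java.util.UUID only.
(defn squuid []
  (let [uuid (java.util.UUID/randomUUID)
        secs (quot (System/currentTimeMillis) 1000)
        msb  (.getMostSignificantBits uuid)
        ;; keep the low 32 bits of the original msb, put the seconds on top
        msb' (bit-or (bit-shift-left secs 32)
                     (bit-and msb 0xFFFFFFFF))]
    (java.util.UUID. msb' (.getLeastSignificantBits uuid))))
```

Because the leading bits are time-ordered, consecutively generated squuids sort close together, which keeps index segments for a unique-identity attribute from fragmenting the way purely random UUIDs do.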
#2016-11-2412:07karol.adamiecdo lookup refs nest? it would be nice to be able to use that as navigation on identities. IE i have unique identity :user/email. i can always grab the user entity using [:user/email “. But then i have a linked entity that has a ref type that is also marked as unique identity that is binding to user entity. I would like to grab that entity by saying [:entity/owner [:user/email “#2016-11-2412:09karol.adamieci can upsert the :entity/, but i need its id to issue [:db.fn/retractEntity id-of-janes-basket]#2016-11-2412:12karol.adamiecit goes against the grain a bit though. I would never need that if not in the REST land ;/#2016-11-2413:41karol.adamiechow does one retrieve a SCALAR value over REST??
[:find ?eid . :in $ :where [?eid :basket/owner 17592186045429]] is not working. Same Q on clojure repl returns scalar value.#2016-11-2413:41karol.adamiecworks without the dot .#2016-11-2413:41karol.adamiecover rest, but i want scalar 😞#2016-11-2415:33karol.adamiec"At present, find specifications other than relations (and also pull specifications) are not supported via Datomic's provided REST API. "#2016-11-2415:36karol.adamiecguys seriously PLEASE PLEASE PLEASE!!! make a fix for REST API to return FULL errors instead of 500 status blackbox. This is a timesink of immense proportions and a hair-pulling, head-bashing ocean of frustration 😱#2016-11-2416:38bhagany@karol.adamiec fwiw, I do [?eid] and then unpack the result in cases like this#2016-11-2416:39karol.adamiec@bhagany over REST?#2016-11-2416:39bhaganyyes#2016-11-2416:39karol.adamiecfor me it throws#2016-11-2416:39bhaganyhrm, I’m a few versions back, maybe they changed it#2016-11-2416:39karol.adamiec[:find [?eid] …#2016-11-2416:39bhaganythat’s unfortunate#2016-11-2416:40karol.adamiecbasically i can make the basic one, but the response is [[234123412]]#2016-11-2416:40karol.adamiecso i unpack it ;(#2016-11-2416:41karol.adamiec@bhagany would you say +1 to having errors from REST endpoint? or is it only me that gets constantly frustrated?#2016-11-2416:41bhaganyoh, I’m definitely with you there. endlessly frustrating.#2016-11-2417:23leovhi. quick question - datomic-free says " No matching ctor found for class clj_logging_config.log4j.proxy$org.apache.log4j.WriterAppender$ff19274a"#2016-11-2417:23leovam I missing a library?#2016-11-2506:52robert-stuttaford@zane, why isn’t it performant?
i can also think of a way to combine d/datoms scans of eavt + vaet for an e, looking for the highest t#2016-11-2511:56jonpitherHow does the Memcached setup work - can peers have their own memcached instances that would become warmed up to their needs, or is it a secondary level distributed cache that all peers would use the same way?#2016-11-2512:43mitchelkuijpersAre there any people here who have some experience with saving money values in datomic? I am leaning towards bigint and then simply saving dollar values#2016-11-2513:07karol.adamiec@mitchelkuijpers i went with using long/bigint and saving cents. Then for display just divide by 100 and attach currency symbol.#2016-11-2513:08mitchelkuijpersYeah that was also my plan. Not sure if a long is always big enough, that is the reason I am leaning towards bigint#2016-11-2513:09karol.adamiecbut i did what i did because of javascript compatibility. I would use decimal otherwise#2016-11-2513:55robert-stuttaford@jonpither memcached as you need it. we connect everything to one cluster right now, but eventually backend webservers would have their own vs end-user webservers#2016-11-2513:56robert-stuttafordadvantage of one is that transactor pushes live index into the one it’s connected to#2016-11-2513:56jonpitherok - let's say you did that, does the transactor still need to be aware of all the various memcacheds out there?#2016-11-2513:56robert-stuttafordwhich turns memcached into the primary datastore and ddb the near-line backup 🙂#2016-11-2513:57robert-stuttafordi know you can only give txor one. i presume that any other url given to a peer but not txor would essentially be a private 2nd tier for all who share it. e.g. @stuarthalloway mentioned having a memcached on his computer for a production transactor (so remote repl queries are faster)#2016-11-2514:05jonpitherso you can give a peer a memcached and not tell the transactor?#2016-11-2514:06robert-stuttafordyes.
you can give different memcached to different peers#2016-11-2514:07robert-stuttafordtransactor uses it as its own 2nd-tier cache (for the queries it does) and also writes live-index segments there. all other peers will use whichever they connect to as 2nd-tier cache. if it’s the same one for everyone, you obviously get leverage#2016-11-2514:28jonpithercool - thanks#2016-11-2516:05pesterhazyI'm working with a cardinality many ref. Does anyone have a transaction fn at hand that replaces all current refs with a new set?#2016-11-2516:06karol.adamiec@pesterhazy haha, i had the same days ago#2016-11-2516:06karol.adamiecin general i turned away from that solution#2016-11-2516:06karol.adamiecand i use a built-in fn for that, :db.fn/retractEntity#2016-11-2516:08karol.adamiecthe function i tried to use before had issues ;/ but here it is:
;; DB function that allows replacing a collection with a new (or empty) collection
{:db/id #db/id [:db.part/user]
 :db/ident :assertWithRetracts
 :db/fn #db/fn {:lang "clojure"
                :params [db e a vs]
                :code "(vals (into (into {} (map (comp #(vector % [:db/retract e a %]) first) (datomic.api/q [:find '?v :where [e a '?v]] db))) (into {} (map #(vector % [:db/add e a %]) vs))))"}}
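Once installed, a transaction function like this is invoked by its ident inside transaction data; a sketch matching karol's earlier call (the `conn` and the map var name `assert-with-retracts` are hypothetical, the eid is the one from above):

```clojure
;; install the fn once by transacting the map above...
@(d/transact conn [assert-with-retracts])

;; ...then call it by ident; an empty vs clears the collection
@(d/transact conn
  [[:assertWithRetracts 17592186045431 :basket/items []]])
```

The transactor expands the call into plain :db/retract and :db/add datoms, so the replace happens atomically within one transaction.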
#2016-11-2516:08karol.adamieccredit to google groups user….#2016-11-2516:08marshall@jonpither you can give the transactor multiple memcached instances and it will push segments to all of them#2016-11-2516:08pesterhazy@karol.adamiec interesting#2016-11-2516:09pesterhazymy use case is I want to re-create all dependent entities and remove old ones in a single tx#2016-11-2516:09pesterhazyideally the caller shouldn't need to know the entids of the dependent entities#2016-11-2516:10pesterhazybut obv not sure that this is the right approach#2016-11-2516:10karol.adamiecif you mark dependants as isComponent they will be retracted#2016-11-2516:10pesterhazywell I don't want to delete the original entity, only update it!#2016-11-2516:11karol.adamiecwell, i compromised on that 🙂. the original entity in my use case is not linked to anything other than owner, so it is ok for me#2016-11-2516:12karol.adamiectry the fn i pasted, it works if you pass an ID. I had problems with it working with tempids or lookup refs#2016-11-2516:14rauh@pesterhazy https://gist.github.com/rauhs/0704f6492674ea79e935a9e01ac3a483#2016-11-2516:20pesterhazy@rauh, that looks great#2016-11-2516:27pesterhazycode looks a bit scary#2016-11-2516:28pesterhazy@rauh, could you give an example of how to use it?#2016-11-2516:29pesterhazythe negative numbers in the gist refer to tempids?#2016-11-2516:30pesterhazyand why do you have to supply a tempid and actual id for each value?#2016-11-2516:35pesterhazyhere's a simpler version: (defn replace-refs [db e attr vs]
  (->> e
       (d/q [:find '[?v ...] :in '$ '?e :where ['?e attr '?v]] db)
       (map (fn [v] [:db/retract e attr v]))
       (concat (map (fn [v] [:db/add e attr v]) vs))))#2016-11-2517:09rauh@pesterhazy There is an example in the doc string and just right afterwards is another one as a #_ comment#2016-11-2517:09rauhWhich you can run on the db (it will return you the transaction generated by the fn)#2016-11-2517:10rauhIf you already have all entities in your db, then you can nil the tempids, they won't be touched#2016-11-2517:10rauhBut it's flexible enough that you can add new entities at the same time.#2016-11-2517:18jonpithermanaged to crash DT on a load-test - Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "hornetq-expiry-reaper-thread"#2016-11-2517:26pesterhazy@rauh, thanks for the explanation#2016-11-2518:05rauh@pesterhazy I've seen the simpler version of this too but decided to go with my own one. It'll properly work even if you specify idents, lookup-refs or entity ids. It'll also work when you :db/add an already existing entity (the above version would fail). And in addition if you add some new entity (with a tempid) you can ref that in the same transaction and assert it. It'll just work. With the simpler version you'd have to transact twice, making your transaction history less transparent on what happened.
I did just simplify my gist a little bit.#2016-11-2607:25robert-stuttafordrad @marshall good to know about multiple memcached for txor#2016-11-2717:53jarppeWhat does it mean when during stress testing Datomic logs "Critical failure, cannot continue: Heartbeat failed"?#2016-11-2717:55jarppeI'm making a lot of transactions and using DynamoDB#2016-11-2717:58jarppeOn AWS console I see a "critical alert" and "Consumed write capacity >= 40 for 5 minutes", so it seems that DynamoDB does not like me anymore, but should that kill transactor completely?#2016-11-2815:00marshallDatomic 0.9.5530 is now available https://groups.google.com/d/topic/datomic/xeo9my3gC88/discussion#2016-11-2815:18robert-stuttafordhttp://blog.datomic.com/2016/11/datomic-update-client-api-unlimited.html#2016-11-2815:19robert-stuttafordholy wow#2016-11-2815:19robert-stuttafordsuddenly a lot of old information riding around in my head 🙂#2016-11-2815:19robert-stuttafordthis is hugely exciting, @marshall !#2016-11-2815:19marshall@robert-stuttaford We certainly think so#2016-11-2815:19marshall😄#2016-11-2815:20karol.adamiecyay. Client API :+1: . Please add javascript soon 😄#2016-11-2815:20robert-stuttaford@marshall the post mentions the client library is open source. on github? or is that coming soon?#2016-11-2815:20robert-stuttaford… java release shortly. nm!#2016-11-2815:22marshallCurrently the source for the clients is provided via source jar.
The clients are currently in alpha, but we are working to move the APIs to their final state and provide the additional documentation, tooling, etc. required to build/fork/modify them. We wanted to get clients into our customer's hands and start getting feedback as soon as possible.#2016-11-2815:22marshallThe source jars are in maven central#2016-11-2815:25robert-stuttafordok, great. i’ll have a look soon#2016-11-2815:30robert-stuttaford@marshall, i haven’t read the comparison doc, yet, but will clients support a permissions model akin to sql GRANT USER READ?#2016-11-2815:31robert-stuttafordone thing that’s worried me about peers is the ease with which d/delete-database can be invoked at a repl#2016-11-2815:31marshallThere will be some controls around those sorts of administrative capabilities. As of now, clients can’t create or delete databases#2016-11-2815:32robert-stuttafordok. that’s great. i can immediately see some of our apps dropping to pure client#2016-11-2815:33robert-stuttafordwhat’s the wire protocol for clients? also tcp + fressian things?#2016-11-2815:33robert-stuttafordlooks like it, if it also has the cache!#2016-11-2815:34robert-stuttafordwow. this is fantastic. great work!#2016-11-2815:35robert-stuttafordclients give you a way to work with many large databases by working with multiple peer servers#2016-11-2815:36robert-stuttafordi was wondering how one would work with multiple 10bn-datom dbs in a single app, looks like Clients gives us one way#2016-11-2815:36robert-stuttafordthinking far into the future now#2016-11-2815:36marshallcertainly the hope is that these additions will enable more architectural flexibility#2016-11-2815:37robert-stuttafordare non JVM clients planned?#2016-11-2815:38robert-stuttafordlike, say, JavaScript? :-)))#2016-11-2815:39robert-stuttafordof course, permissions model vital for this, which may mean lots of work still to do. 
i don’t see anything in the comparison that excludes javascript#2016-11-2815:39marshallit is absolutely something we want to support#2016-11-2815:39marshalltimeline TBD#2016-11-2815:40robert-stuttafordi think it’s a great testament to Datomic’s design that so little of the model had to change to support Clients. i totally get the tempid variant#2016-11-2815:40robert-stuttafordexcellent 🙂#2016-11-2815:41ckarlsenam I dreaming?#2016-11-2815:42robert-stuttafordvery lucidly 🙂#2016-11-2815:44val_waeselynck@marshall you guys rock!#2016-11-2815:44ckarlsenbest christmas present ever 🙂#2016-11-2815:46marshall🙂#2016-11-2815:46curtosisexcellent news!#2016-11-2815:58curtosishmm… have to think about how the new Starter license terms change things. I think they make sense, but it’s a model I don’t think I’ve seen before.#2016-11-2815:59ljosathe starter terms seem great for evaluating, for smuggling a system into production so it can prove its worth before buying a license, or for startups.#2016-11-2816:00curtosisthough it does break the “build a small cheap app to run on DynamoDB and leave it alone” model.#2016-11-2816:01kirill.salykinwow!#2016-11-2816:02val_waeselynck@ljosa I can relate to that 🙂#2016-11-2816:02ljosaWe started with the old "two free peers" starter license, but that was too limiting even before it was in production. Then we bought 22 licenses, thinking that we would have a bunch of peers. As it turns out, we have fewer than 10 peers, for it was more convenient to write a server that answers queries (datomic or ADI) on behalf of most of our clients. Even for clients that are not microservices, it was good to avoid the cold object cache at startup.#2016-11-2816:07curtosismy model for this particular kind of app is really just a single peer - more like embedded, but with non-filesystem storage (either Dynamo or Postgresql). 
It’s a bit of a weird use case, perhaps, but there’s something to be said for using the existing storage backup mechanisms rather than managing file-based backups.#2016-11-2816:08curtosisBut that’s maybe just pushing complexity around without really making any real difference.#2016-11-2816:39Drew VerleeIf i just want to play with Datomic for learning purposes, whats the best route? Datomic Starter from http://www.datomic.com/get-datomic.html?#2016-11-2816:47Alex Miller (Clojure team)yes, you can do everything you need to with that with no initial cost#2016-11-2816:54dm3quite a significant release there!
Does anyone know how to interpret the "Maintenance/Updates Limited to 1 Year" under the new Datomic Starter licence terms?
Does this mean that if you get the licence for 0.9.5530 now and when another version is released after 1+ years you need to get a new (paid?) licence?#2016-11-2816:56luposlipHi @alexmiller, I have a license on Starter that expired the 11th of November. The transactor and peer I use is version 0.9.5344. Can that be upgraded?#2016-11-2816:56Alex Miller (Clojure team)I’m not the right person to answer that - check with @marshall or email <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2016-11-2816:58luposlipAlright - @marshall, I have a license on Starter that expired the 11th of November. The transactor and peer I use is version 0.9.5344. Can that be upgraded?#2016-11-2816:59Alex Miller (Clojure team)@dm3 I defer to someone from the Datomic team for anything official. but my understanding is that after 1 year, you can continue using the versions you have with your license (that’s the “perpetual” part) but that if you want to upgrade past that point, you have to acquire a paid license.#2016-11-2817:03marshallAlex’s interpretation is correct#2016-11-2817:03bhaganywhatttttttttttttttttttttttttttttttttttttttttttttttttttttttttt this is amazing#2016-11-2817:03marshallThe renewal only covers the ability to use newer versions of the software.#2016-11-2817:03marshallStarter was always intended as a path for customers to explore and use Datomic in a low-risk, low-cost approach as they developed their applications and moved toward production. We feel that a year is generally sufficient time to evaluate a product and develop a business application around that product. 
If you feel that you require a longer period to evaluate or develop against Datomic, please contact us.#2016-11-2817:04marshallThat includes if your Starter license maintenance has recently expired and you’d like to discuss the option to evaluate the latest release(s).#2016-11-2817:04marshallYou can always email me at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2016-11-2817:05bhaganymy boss is so happy that we paid for a $3k license last week#2016-11-2817:05bhagany(sorry, datomic team)#2016-11-2817:28chadhskind of a bummer that the free starter pro tier went away for small projects. i suppose the idea is the free tier can suffice for that and needs beyond that you should pay for.#2016-11-2817:28Alex Miller (Clojure team)starter pro is still free#2016-11-2817:28chadhsfor one year only i thought?#2016-11-2817:29Alex Miller (Clojure team)you can use it forever, just can’t keep upgrading after a year#2016-11-2817:32chadhsright, i understand that alex thnx. it just seems like an odd limitation. i liked the idea of being able to upgrade / renew that so you had a way to mock a production setup easily. then if you were going to roll something out for “real” for yourself or a client you could by the appropriate licensing. Just my 2c#2016-11-2817:34pesterhazypersonally my experience has been that old datomic versions don't just stop working; I only update for new features/significant bug fixes#2016-11-2817:36chadhsalso, people that registered in the past when it was renewable won’t get to download the lastest version and play with things like memcache integation etc without creating a new account.#2016-11-2817:37chadhsi mean all this as helpful critique. not at all “whining” about the idea of having to buy datomic; i think people should pay for it in production and not try to “get by” with starter pro personally.#2016-11-2817:37jonas> With today’s release, we are making available the alpha version of the open source Client library for Clojure
Is the source available somewhere?#2016-11-2817:37chadhshappy to move this to mailing list as well to leave room for Qs#2016-11-2817:37andyparsons@marshall piling on- this is all great news. One question: what is the definition of "system" for the new pro pricing? As in, "ongoing maintenance fees of $5,000 per year per system"#2016-11-2817:37marshallGreat question#2016-11-2817:38marshallSystem is a production Transactor and it’s connected peers/clients#2016-11-2817:38andyparsonsgot it#2016-11-2817:38marshallso you can still run unlimited dev/staging/test instances#2016-11-2817:38marshallbut if you need 2 separate live production transactors, then that’s two licenses#2016-11-2817:39marshall@jonas The source is provided as a source jar. it is in Maven Central#2016-11-2817:39chadhs@marshall would a typical deployment then be ~$10k for primary and backup transactor?#2016-11-2817:39marshallno, sorry, HA doesn’t count#2016-11-2817:39jonas@marshall thanks#2016-11-2817:40marshallif you have a single transactor (+HA) + your peers/clients - that’s $5000 per year#2016-11-2817:40jonasare you planning to push it to https://github.com/datomic?#2016-11-2817:40chadhsso “system" is transactor, ha, and connected peers?#2016-11-2817:40chadhsoh just saw your answer; thanks @marshall !#2016-11-2817:40marshall👍#2016-11-2817:41marshall@jonas The clients are currently in alpha, but we are working to move the APIs to their final state and provide the additional documentation, tooling, etc. required to build/fork/modify them. We wanted to get clients into our customer's hands and start getting feedback as soon as possible. We would also love to have feature requests/feedback/etc on clients (and on all parts of Datomic). We've recently set up a system with http://Receptive.io to gather customer feedback. 
You can access it from the top nav bar of your http://my.datomic.com dashboard via the "Suggest Features" link#2016-11-2817:41andyparsonscongrats @marshall and team, this is a big deal for us (and for my ability to recommend Datomic without reservation to other teams)#2016-11-2817:41marshall@andyparsons Thanks! We’re really excited and glad you are too.#2016-11-2817:46bbloomGlad to see the string-based tempids thing! I noted the comment about the underutilization of partitions. I’m curious: Does anybody actually make good use of partitions? What’s the impact of using them?#2016-11-2817:48bbloomand then i guess the question is how are they automatic now? Just everything in one big default partition? Or dynamically partitioned somehow?#2016-11-2817:59jonas@marshall I can’t see the “Suggested Features” link at http://my.datomic.com for some reason#2016-11-2818:00marshall@jonas You need to have a license in the account. Do you have a Starter license on the account you’ve logged in with?#2016-11-2818:00jonasOk, I don’t have that yet#2016-11-2818:00marshallYep, the link will show up once you have a license in the account#2016-11-2818:18timgilbertThis is awesome, thanks guys. Definitely looking forward to more info about the tempid changes.#2016-11-2818:37dpsuttonthere's a hacker news article about datomic right now if you're interested in reading the comments: https://news.ycombinator.com/item?id=13055961#2016-11-2819:17weijust saw the article, thanks for the licensing change! am also a fan of tempid improvements#2016-11-2819:19haywoodwow, I just started a project with datomic and these changes are amazing!#2016-11-2820:06ljosaOne of our developers on the javascript/golang side just expressed dismay over the announcement that "with the introduction of client libraries, the REST server is no longer supported for new development." 
Why not keep supporting an HTTP API?#2016-11-2820:29marshallThe REST api will continue to ship with Datomic; our development focus will be on clients - including clients for other non-jvm languages#2016-11-2822:26danielcomptonhttps://danielcompton.net/2016/11/29/guide-to-datomic-licensing-changes#2016-11-2822:26danielcomptonI wrote a guide to the licensing changes, let me know if I've made any mistakes#2016-11-2822:37sparkofreasonIs anybody using Datomic with AWS Lambda? I had read somewhere that this wouldn't work, but seems like the new chunked query API makes it a nice fit for processing large query result sets.#2016-11-2823:50Matt ButlerHi, Is there a max number of datums for a single transaction?#2016-11-2912:57ustunozgurThe licensing changes seem to be 2 steps forward, 1 step back. However, from a business perspective, I do sympathize with Cognitect.#2016-11-2913:02kardanYes, I fully understand that there need to be a path to pay. But I do find myself asking myself if I think my next project will become serious enough in a year to make me want to pay that yearly fee.#2016-11-2913:03kardanBut I know too little about Datomic to really say much. Maybe running on the same version is totally fine#2016-11-2913:30robert-stuttafordconsidering what hassles using Datomic has saved us, the price is very cheap.#2016-11-2913:32kardanI can image that. Reading back I might have sounded a bit harsh. But still something that one needs to be convinced about to take that path#2016-11-2913:33robert-stuttafordof course. i suppose i didn’t take much convincing. don’t regret the decision for a second. 
it’ll be 4 years in production in Jan#2016-11-2913:42robert-stuttafordhttps://twitter.com/RobStuttaford/status/803594325405868032#2016-11-2913:48robert-stuttaford@jaret @marshall typo on the tutorial Notice that /:db.cardinality/many captures ...#2016-11-2914:20jaret@robert-stuttaford thanks!#2016-11-2914:59staskis there a way to automagically create a database when running peer-server if it doesn’t exist yet?
i’m trying to build an environment consisting of a dev transactor and peer-server using docker-compose#2016-11-2915:19marshallPeer Server can ‘create’ memory DBs, but you’ll need to use a Peer to create dev (or other storage) databases#2016-11-2915:51jdubieit doesn’t seem like there is an index for this, but is there any way to get a vector or lazy-seq of all entity ids in a datomic database?
these both throw exceptions
(datomic.api/index-range db :db/id nil nil)
(datomic.api/q '[:find ?e
:in $
:where [?e :db/id]]
db)
CompilerException java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :db/id, compiling: ...
...
datomic.api/index-range api.clj: 178
datomic.db.Db/indexRange db.clj: 1747
datomic.db/attr-index-range db.clj: 799
datomic.db/require-id db.clj: 555
datomic.error/arg error.clj: 55
datomic.error/arg error.clj: 57
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: :db/id
data: {:db/error :db.error/not-an-entity}
clojure.lang.Compiler$CompilerException: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :db/id, compiling: …
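An editorial aside on the question above (a sketch, not from the thread): `:db/id` is not a real attribute, which is why both calls throw. One workaround on a Peer is to walk the EAVT index with `d/datoms` and collect the distinct entity ids:

```clojure
(require '[datomic.api :as d])

;; Walk the EAVT index and keep each entity id once.
(defn all-entity-ids
  "Returns a lazy seq of every entity id in db (can be very large!)."
  [db]
  (->> (d/datoms db :eavt)
       (map :e)
       (dedupe)))   ; EAVT is sorted by e, so dedupe is enough
```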
#2016-11-2915:55stask@marshal thanks, i was hoping to be able to create a full environment (transactor, peer-server, applications using client api) using docker-compose, will probably add some utility that uses peer library and just creates new db after transactor starts and before the peer-server starts#2016-11-2916:08marshall@stask Yep - you should be able to ‘script’ that via a Peer#2016-11-2916:45timgilbertHey, looks like the Clojure API docs here are out of date: http://docs.datomic.com/clojure/index.html#datomic.api/log
...given that the memory database does now support the log API: http://blog.datomic.com/2016/08/log-api-for-memory-databases.html#2016-11-2916:46marshall@timgilbert Thanks - i’ll fix it!#2016-11-2916:47timgilbertThanks @marshall! Also, can you point me to any docs for (d/history) apart from the docstring in the API docs?#2016-11-2916:48marshall@timgilbert http://docs.datomic.com/filters.html#history and http://docs.datomic.com/best-practices.html#use-history#2016-11-2916:48marshallalso some discussion here: http://blog.datomic.com/2014/08/stuff-happens-fixing-bad-data-in-datomic.html#2016-11-2916:49timgilbertAwesome, thanks#2016-11-2916:50marshall👍#2016-11-2918:35shaunxcodeare there any published details on the implementation of the peer server/client e.g. what network protocol is it using etc?#2018-03-2213:36laujensenthanks#2018-03-2213:49alexkI’m interested in testing a couple small datomic queries in unit tests. I realize I could spin up an in-memory db but I think there’s a way to use a plain Clojure data structure as the db, is that true? What shape would it need to have so that something like the q and transact functions would work with it (treat it as a real database)?#2018-03-2213:50val_waeselynckTransact won't work with anything else than a Datomic connection.
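An editorial aside (a sketch, not from the thread): `d/q` does accept plain collections of tuples as data sources, so small queries can be exercised without any database, though `transact`, `entity`, and `pull` still need a real connection or db value. The `:person/*` attributes below are made up for illustration:

```clojure
(require '[datomic.api :as d])

;; A plain vector of [e a v] tuples stands in for a database.
(d/q '[:find ?name
       :where
       [?e :person/name ?name]
       [?e :person/age ?age]
       [(> ?age 30)]]
     [[1 :person/name "Alice"]
      [1 :person/age 42]
      [2 :person/name "Bob"]
      [2 :person/age 25]])
;; => #{["Alice"]}
```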
How about the q function - would it work with a plain data structure?#2018-03-2213:58alexkAnd the entity function too, I guess…#2018-03-2214:00val_waeselynck@U8ZA3QZTJ what advantages do you see in not using an in-memory db for unit testing?#2018-03-2214:09colindresjIf you’re looking to isolate your transactions against a DB during unit tests, https://github.com/vvvvalvalval/datomock works fairly well#2018-03-2214:24alexkThe rationale would be to minimize the amount of code that the test interacts with. I’m not completely against using in-memory databases in unit tests, but I was hoping I could even avoid that! Thanks, both of you#2018-03-2214:29val_waeselynckFor Datalog querying, you could just use a vector of tuples, but you will have less confidence that the query behaves the same as against a real db.#2018-03-2214:29alexkFair enough, thanks#2018-03-2214:26laujensen@marshall - I've just tried running a backup-db on one of our shards to import it locally using restore-db, but on import I see that several transactions aren't carried over. They are visible on the live system, but not locally. And they’re about 1 day old. What causes this?#2018-03-2214:52marshalldid you restore into a clean (empty) storage? restoring on top of an existing DB that has diverged from the original source isn’t supported#2018-03-2215:31laujensenThat's it, thanks
ExceptionInfo Forbidden to read keyfile at s3://...#2018-03-2214:49stijnall steps before worked properly#2018-03-2214:49Alex Miller (Clojure team)that’s an issue with your aws creds#2018-03-2214:49stijnyes, but how can I specify the AWS credentials to the client?#2018-03-2214:49stijni'm using a aws profile to authenticate#2018-03-2214:49Alex Miller (Clojure team)the normal ways - ~/.aws/credentials, AWS_ACCESS…#2018-03-2214:50Alex Miller (Clojure team)AWS_PROFILE#2018-03-2214:54stijnhmm that's still not working. is the client using the order of the aws java sdk for credentials?#2018-03-2215:00marshallthe client uses the default credentials provide#2018-03-2215:00marshallprovider#2018-03-2215:00marshallwhich has an implicit order, documented by AWS#2018-03-2215:00marshallhttps://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html#2018-03-2215:08stijnok, it seems like I'm running into this issue https://github.com/aws/aws-sdk-java/issues/803#2018-03-2215:09stijndelegation of credentials to a root profile doesn't work the same way in the java sdk and the cli 🙂#2018-03-2215:09marshallah. good find#2018-03-2215:33laujensen@marshall: I’ve tried partitioning the excisions into chunks of 100 entities. The indexing service has been running for 30+ minutes now with half-load on a quadcore system on just the first chunk. Is that to be expected? If so, I guess there’s no way around putting a heavy load on the live system until all 5000 pages have had their history removed?#2018-03-2216:42marshallYes, excision is very expensive. It can require re-writing major portions of the index#2018-03-2216:33folconHi, just to check is datomic rest the only way to connect to datomic from another language?#2018-03-2216:45marshall@folcon At present, yes. We provide Clojure and Java versions of the Peer library and a Clojure Client library.
We intend to publicize the client wire protocol in the future to allow other language clients (see: https://www.datomic.com/cloud-faq.html#_will_you_publish_the_client_protocol_so_i_can_write_my_own_datomic_client)#2018-03-2216:49folcon@marshall Thanks, if you wouldn’t mind me asking, I have two questions.
1) Is there any way to secure datomic rest other than sitting an nginx with basic auth in front of it? I’ve successfully managed to deploy it and it’s working, and now I need to work out at least a basic level of security.
2) How would I go about writing a subquery then? I’m trying to ensure that all queries to the system respect some level of auth, so I want to ensure that each user only looks at a database which contains their datoms. My understanding is that this is possible by first querying what a user can see, and then passing on the user query to the resulting database. I’m just unsure how to specify that to datomic rest.#2018-03-2216:59JJ@folcon https://docs.datomic.com/on-prem/rest.html#2018-03-2216:59JJin case you miss the no more development notice#2018-03-2217:01folconI am aware it’s been deprecated for some time, I’m just wondering if there exists anything for this? I’m basically bumbling my way with these unfortunately :)… And unfortunately the client api’s don’t seem targeted to what I’m doing at the moment. I can’t access them from clojurescript as far as I can see, and running yet another server just to act as a clojure proxy to talk to datomic seems a little extreme?#2018-03-2217:02JJexposing datomic directly to the internet seems more extreme to me, even if behind nginx#2018-03-2217:04JJbut nginx has a lot features and is highly configurable, maybe you can use something like openresty#2018-03-2217:06marshall@folcon “running another server just to act as a clojure proxy” is exactly what the REST service is. It is a Datomic Peer that consumes HTTP calls and executes them against Datomic#2018-03-2217:07marshallIf you want native access from your language of interest and are already on the Peer model, I’d suggest writing a Peer-based system that consumes whatever it is your application uses (http/tcp/other) and do it that way#2018-03-2217:16folconThat’s a valid point, however for the moment the rest service does meet most of my needs. Is my understanding of my second question correct or should I be trying something different?
I’m having a fair bit of difficulty finding information about the best way to query against a subset of the datoms in the database.#2018-03-2217:18marshallyou can use filters (https://docs.datomic.com/on-prem/filters.html) to restrict what a given query can “see”#2018-03-2217:18folconThank you, I’ll give that a read 🙂#2018-03-2217:42folconI might be doing this completely wrong, but trying to reproduce the filtering technique is giving me odd results:
q*:
[:find ?e ?ent ?doc ?f_db
:in $plain ?filterfn
:where [(datomic.api/filter $plain ?filterfn) ?f_db]
[(datomic.api/entity ?f_db ?e) ?ent] [$plain ?e :db/doc ?doc]]
args:
[{:db/alias "sql/test"} (fn [_ datom] (< 20 (.e datom)))]
As a sanity test, I’m trying to see if I can filter all the :db/doc strings that have an entity id lower than 20.
Which is still giving me results such as:
?e ?ent ?doc ?f_db
8 {:db/id 8} "System-assigned attribute set to true for transactions not fully incorporated into the index"
I’m probably constructing this query completely wrong.#2018-03-2218:08folconOk, so from what I can tell, the query language cares what the names of the variables are, and you can’t define a database and reuse it as you’ll have a var that starts with a ? not a $.
java.lang.Exception: processing rule: (q__355 ?e ?ent ?doc ?f_db), message: processing clause: [?f_db ?e :db/doc ?doc], message: :db.error/invalid-data-source Nil or missing data source. Did you forget to pass a database argument?
#2018-03-2218:18marshallthe entity with EID 8 does pass that filter#2018-03-2218:19marshall@folcon ^#2018-03-2218:19folconsorry?#2018-03-2218:19marshallthe entity ID you found there is ‘8’, which is < 20#2018-03-2218:19folconoh, am I checking a string?#2018-03-2218:19marshallno#2018-03-2218:19marshallentity id is a long#2018-03-2218:20marshallsorry i misread your filter function; one second#2018-03-2218:20folconI’m pretty sure (< 20 8 ) is false?#2018-03-2218:21folconSorry, it’s been a long day 🙂#2018-03-2218:25marshallahh. i think i see#2018-03-2218:25marshallyou need to get a filtered value of the db as an input to the query#2018-03-2218:25marshallone sec#2018-03-2218:28folconsure :)…#2018-03-2218:31marshallso the functionality you’re looking for doesnt require a join on 2 dbs#2018-03-2218:31marshall(d/q
  '[:find ?e ?doc
    :in $
    :where
    [?e :db/doc ?doc]]
  (d/filter (d/db conn) (fn [_ datom] (< 20 (.e datom)))))
#2018-03-2218:31marshallfind all the entities in a filtered db#2018-03-2218:31marshallfor perf reasons, you could join against a non-filtered db#2018-03-2218:31marshall(which is what the example shows#2018-03-2218:34marshalli’m not sure if/how you’d do the filter inside the query body and then also bind it to a datasource#2018-03-2218:35marshalli have to run, but i’ll be back a bit later#2018-03-2218:35folconSo wait, do I need to make two queries to the rest api? I’m trying to understand how to translate that query, my two args that I can use are the q* and args. Args needs to be at least [{:db/alias "sql/test"}] from what I understand and as far as I can work out I can’t pass a filtered db as I have no idea how to reference it…#2018-03-2218:35folconthanks#2018-03-2218:35folconI’ll be at it for a bit :)…#2018-03-2218:40marshallI suspect you may not be able to pass an arbitrary filtered database to the rest API#2018-03-2219:45folconThat’s frustrating#2018-03-2220:10stijn@marshall regarding the issue above on AWS credentials, we are using IAM role assumption to access different aws accounts for dev, staging, prod. In order to make this work, you need to add a dependency [com.amazonaws/aws-java-sdk-sts "1.11.210"]. Not sure if you want to add this to the datomic client library or mention it in the documentation, but debugging the problem was a bit annoying since d/create-database swallows the original error of the aws sdk#2018-03-2220:10stijnwhich was#2018-03-2220:10stijn(.getCredentials (com.amazonaws.auth.profile.ProfileCredentialsProvider.))
ClassNotFoundException com.amazonaws.services.securitytoken.internal.STSProfileCredentialsService java.net.URLClassLoader.findClass (URLClassLoader.java:381)
#2018-03-2220:11stijnsome more info: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/securitytoken/internal/STSProfileCredentialsService.html#2018-03-2220:11stijnit works for me now, but you can maybe help other customers with that info 🙂#2018-03-2220:12johnjis java 9/10 not supported for the datomic cloud client?#2018-03-2220:20folconHuh, you can introspect the db enough to get the database credentials by querying it.#2018-03-2220:34Alex Miller (Clojure team)@lockdown- you seem to be assuming it’s not - any reason why?#2018-03-2220:40Alex Miller (Clojure team)I haven’t tried it, but one problem that arises in several places right now is due to the removal of javax.xml.bind from the default classpath. Using --add-modules java.xml.bind on the jvm will add it back in.#2018-03-2220:42marshall@stijn Thanks for the heads up - I will see if I can find a place in the docs where that would fit well#2018-03-2220:49johnj@alexmiller correct, adding java.xml.bind fixes it, was curious if anything greater than java 8 was discouraged by datomic devs.#2018-03-2220:51Alex Miller (Clojure team)that was just a guess, would be curious if it’s a datomic lib dep or something else that you’re running into#2018-03-2220:53johnjthe datomic client eats the stacktrace I think but per your clj -J-verbose:class advice it might be the aws sdk version but not sure, didn't dig more#2018-03-2220:56Alex Miller (Clojure team)probably a good thing for @marshall to know if he doesn’t already#2018-03-2221:00marshallyes indeed. thanks!#2018-03-2221:00marshalli will also look at adding that to docs#2018-03-2221:09johnj@marshall indeed is discouraged? using something greater than java 8#2018-03-2221:09folcon@marshall I have got the filtered db in the query though and I can inspect it:
[:find ?e ?ent ?doc ?f_db ?prs :in $plain ?filterfn :where [(datomic.api/filter $plain ?filterfn) ?f_db] [(keys ?f_db) ?prs] [(datomic.api/entity ?f_db ?e) ?ent] [$plain ?e :db/doc ?doc]]
?prs
(:id :memidx :indexing :mid-index :index :history :memlog :basisT :nextT :indexBasisT :indexingNextT :elements :keys :ids :index-root-id :index-rev :asOfT :sinceT :raw :filt)
the :filt is
(fn [_ datom] (< 20 (.v datom)))
so this is clearly the filtered database. I just don’t know how to query against it.#2018-03-2221:14folconI’ve been trying to pass a string query, as the api states that’s possible:
[:find ?e ?ent ?doc ?f_db ?prs :in $plain ?filterfn :where [(datomic.api/filter $plain ?filterfn) ?f_db] [(:filt ?f_db) ?prs] [(datomic.api/q "[:find ?e :where [?e :db/doc _]]" ?f_db) ?ent] [$plain ?e :db/doc ?doc]]
but it’s erroring:
java.lang.Exception: processing rule: (q__1170 ?e ?ent ?doc ?f_db ?prs), message: processing clause: {:argvars (?f_db), :fn #object[datomic.extensions$eval1162$fn__1163 0x1b0f3213 "#2018-03-2221:41marshall@lockdown- no, i meant it was useful for me to know. No reason not to use 9 or 10 if that fix works#2018-03-2221:42marshall@folcon I wonder if you can then use the filtered DB in a nested query#2018-03-2221:44folcon@marshall That’s what I’ve been trying to do here -> https://clojurians.slack.com/archives/C03RZMDSH/p1521753296000093, but it doesn’t seem to be working?#2018-03-2221:45marshall[:find ?e ?ent ?doc ?f_db ?prs ?filtecount
:in $plain ?filterfn
:where [(datomic.api/filter $plain ?filterfn) ?f_db]
[(keys ?f_db) ?prs] [(datomic.api/entity ?f_db ?e) ?ent]
[$plain ?e :db/doc ?doc]
[(datomic.api/q '[:find (count ?ents)
:where [?ents :db/doc]]
?f_db) [[?filtecount]]]]#2018-03-2221:45marshalltry that ^#2018-03-2221:45marshalli’m on a phone call or I’d try#2018-03-2221:56folcon@marshall Funnily enough:
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to resolve symbol: ' in this context, compiling:(NO_SOURCE_PATH:0:0)
manually calling quote instead of ' gives:
java.lang.Exception: processing rule: (q__1315 ?e ?ent ?doc ?f_db ?prs ?filtecount), message: processing clause: {:argvars (?f_db), :fn #object[datomic.extensions$eval1307$fn__1308 0x1f4747dd "
The string variant doesn’t do much better
[:find ?e ?ent ?doc ?f_db ?prs ?filtecount
:in $plain ?filterfn
:where [(datomic.api/filter $plain ?filterfn) ?f_db]
[(keys ?f_db) ?prs] [(datomic.api/entity ?f_db ?e) ?ent]
[$plain ?e :db/doc ?doc]
[(datomic.api/q "[:find (count ?ents)
:where [?ents :db/doc]]"
?f_db) [[?filtecount]]]]
java.lang.Exception: processing rule: (q__1277 ?e ?ent ?doc ?f_db ?prs ?filtecount), message: processing clause: {:argvars (?f_db), :fn #object[datomic.extensions$eval1269$fn__1270 0x6f32c7f9 "#2018-03-2221:57marshallcopy paste issue with single quote probably#2018-03-2221:58folconNot sure what the casting issue is#2018-03-2221:58marshallgot it#2018-03-2221:58marshall(d/q
'[:find ?filtecount
:in $ ?filterfn
:where [(datomic.api/filter $ ?filterfn) ?f_db]
[(datomic.api/q '[:find ?ents
:where [?ents :db/doc]]
?f_db) [[?filtecount]]]]
(d/db conn) (fn [_ datom] (< 20 (.e datom)))) #2018-03-2221:59marshallbad var names. sorry i’l fix#2018-03-2222:00marshallmore interesting with correct name:
(d/q
'[:find ?filtecount
:in $ ?filterfn
:where [(datomic.api/filter $ ?filterfn) ?f_db]
[(datomic.api/q '[:find (count ?ents)
:where [?ents :db/doc]]
?f_db) [[?filtecount]]]]
(d/db conn) (fn [_ datom] (< 20 (.e datom)))) #2018-03-2222:00marshallfind the count of entities with :db/doc in the filtered db#2018-03-2222:04folconSo the reader can’t deal with the single quote at all, so I’ve been replacing the query with the string version or manually calling the quote function, however there’s a relatively consistent issue of message: clojure.lang.PersistentList cannot be cast to clojure.lang.IFn in both cases.#2018-03-2222:04folconI’m really not sure what the problem is here.#2018-03-2222:40marshallThis is specifically with the rest api?#2018-03-2222:41marshallI'll have to try that tomorrow morning#2018-03-2222:47folconyep, all of the queries I’m running are through the rest api.#2018-03-2312:23folcon@marshall Ok, so I might have figured out a work around. I can address and query different datomic databases, so I’m going to try the model of each user having a separate datomic db on the same store. Is there a flaw with this design?#2018-03-2313:19marshall@folcon how many databases are you thinking?#2018-03-2313:20folconwell in our trial period we might end up with a few hundred users#2018-03-2313:21marshallDatomic On-Prem is designed to have a single primary db behind the transactor. a few ‘housekeeping’ dbs in addition would be OK, but having many dozens of active databases isn’t recommended#2018-03-2313:27marshall@folcon one second - I had a chance to look at the REST api and was able to run the query I wrote yesterday#2018-03-2313:28marshall[:find ?filtecount
:in $
:where [(datomic.api/filter $ (fn [_ datom] (< 20 (.e datom)))) ?f_db]
[(datomic.api/q (quote [:find (count ?ents)
:where [?ents :db/doc]]) ?f_db) [[?filtecount]]]] #2018-03-2313:28marshall@folcon ^ the cast issue was from passing a function as an arg#2018-03-2313:28marshallif you put it inline in the query it works fine#2018-03-2313:31folconHmm, that limitation is rather irritating, here I thought I’d found a the perfect way to ensure user data remained separate while still being able to query across it :(…#2018-03-2313:31marshallyou should still be able to do that; you can parameterize constants inside the function (i believe)#2018-03-2313:31folconok, I’m currently in the middle of something else, but I’ll be able to give that a go in an hour and a half :)… Definitely going to give that a shot. Thank you!#2018-03-2313:32folconI’m concerned that if I can’t pass the function as a parameter I’m going to have to do query mangling with strings/datastructures#2018-03-2313:32marshallwhat language are you coming from?#2018-03-2313:37marshalli believe i was mistaken - you can’t parameterize constants within the nested filter predicate I dont think#2018-03-2313:55folconMy backend is python based#2018-03-2315:37folconfrontend clojurescript 🙂#2018-03-2316:49folcon@marshall It looks like it worked, the query you mentioned here -> https://clojurians.slack.com/archives/C03RZMDSH/p1521811630000098#2018-03-2316:50folconThanks, I’m going to unpack this and see if I can work out how to get the rest of my queries to use this filtering technique :)…#2018-03-2323:09James VickersWhat storage services do most people seem to use for on-prem? Do a lot of you use Cassandra or is it pretty much all SQL?#2018-03-2401:12johnjI have no idea, but my guess would dynamodb#2018-03-2506:50kardanI’m playing with Datomic, was to write my first test using create & delete-database. 
Can’t I use the client api for that?#2018-03-2521:05a.espolovHello#2018-03-2521:06a.espolovGuys, is simulant the actual tool for testing a db?#2018-03-2600:03Alex Miller (Clojure team)It’s really a tool for testing systems, of which the database is one component#2018-03-2613:51alexkI’ve got a funny thing happening when I call a transaction function#2018-03-2614:08alexkAnswer: make sure the attribute you’re reading is present on the entity. In my case, :myns/counter hadn’t ever been set on that particular entity, and that resulted in addition to nil within the db function#2018-03-2615:55Petrus TheronCan I use d/filter for user-data privacy of present day (not historical) data? E.g. given that every user-related fact belongs to an entity that has an :entity/owner attribute set to that user's ID, can I efficiently filter out all datoms not belonging to that user? Or do I need to "own" these facts at the transactional level and filter by tx-meta?#2018-03-2617:01kardanThe /bin/maven-install (datomic-free at least) script could do with a shebang. Is there a place to report these things?#2018-03-2617:18jaret@kardan if you’d like an absolute path or shebang line added to the script, you could log a feature request on our “suggest a feature” portal. You can get there from your datomic account at https://my.datomic.com/account and clicking on the “suggest a feature” link, top right.#2018-03-2617:25kardan@jaret ok cool. It was nothing huge for me, just noticed that I could not run the script. Still trying to figure out how things hang together.#2018-03-2620:31donaldballA fairly simple question about the #db/fn literal: I’m trying to apply some formatting to our schema.edn file which contains one of these, and the resulting edn is unreadable. My rewrite fn is:#2018-03-2620:31donaldball(defn rewrite-schema!
  []
  (binding [*print-namespace-maps* false]
    (let [schema (into []
                       (map (fn [m]
                              (into (sorted-map)
                                    (remove (fn [[k v]]
                                              (contains? #{:db.install/_attribute :db/id} k)))
                                    m)))
                       (edn/read-string {:readers *data-readers*}
                                        (slurp "resources/data/schema.edn")))]
      (clojure.pprint/pprint schema (io/writer "resources/data/schema.edn")))))
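An editorial aside on the unreadable-edn problem above (a sketch, not from the thread): one way to round-trip `#db/fn` without evaluating it is to read unknown tags as plain `clojure.core/tagged-literal` data, which prints back as the original tagged form:

```clojure
(require '[clojure.edn :as edn])

;; :default is called with [tag form] for any unknown tag; tagged-literal
;; builds a TaggedLiteral record that prints back as #db/fn {...}.
(defn read-schema-preserving-tags [s]
  (edn/read-string {:default tagged-literal} s))
```

With this, `(pr-str (read-schema-preserving-tags "#db/fn {:lang :clojure}"))` should yield the same `#db/fn` form it read, so the literal survives a pprint pass.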
#2018-03-2620:32donaldballIs there a convenient way to have #db/fn roundtrip without evaluation?#2018-03-2620:52joshwolterNew to the #datomic channel here... i have a 43 GB datomic db on PG storage. I run incremental backups every 15 minutes, and after ~10 days my backup directory is at 425 GB. Is there a way to not log garbage-collection (I have a gc job running every 3 hours btw)?#2018-03-2622:02jaret@josh.wolter are you running PG’s vacuum full pg job? It sounds like you’re backing up garbage with each backup. (seems like you knew that already). I’d also be curious to get the output from your diagnostics command:
;;Prints a map with information about a database, storage, the catalog, and peer settings:
bin/run -m datomic.integrity $DB-URI
Substitute your URI ^ for $DB-URI and ensure it is from a machine that is able to reach storage.#2018-03-2622:03jaretIf you’d like you can private message me the output, or we could open a case. Just e-mail me at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>.#2018-03-2622:04jaretThe output from diagnostics may not be information you’d like to share over slack 🙂#2018-03-2622:05jaret<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> works too 🙂#2018-03-2701:49joshwolterThanks @jaret! I had to run, will do tomorrow morning.#2018-03-2710:32iarenazaIs it possible to have database functions if using the Client API? Does it depend on whether you are using Datomic On-Prem or Datomic Cloud?#2018-03-2720:04stijnIs it possible to run in-mem with the client lib?#2018-03-2720:05stijnI can't seem to find anything in the docs#2018-03-2720:06stijnor is it this? https://docs.datomic.com/on-prem/first-db.html#2018-03-2720:23donmullen@stijn - no in-mem with client lib#2018-03-2720:49donmullenSorry @stijn - was thinking cloud client - Jaret / Marshall obviously correct!#2018-03-2720:23Wes HallAnyone know if the datomic cloud client library works in cljs? Or if there is a cljs port?#2018-03-2720:25donmullen@wesley.hall - no - I believe there is somewhere you can vote for new client libraries - cljs is one people (including myself) have been asking about. There is an issue with security that would need to be addressed.#2018-03-2720:27Wes HallOk, thanks. I will look for the vote. It's only really that I want to access from AWS lambda. I can build the lambda on the JVM but startup time is a bit of a PITA when it comes to JVM lambdas, node is better.#2018-03-2720:29stijnok, no in-mem then, but how do you develop with rapid schema changes when trying out stuff?#2018-03-2720:34marshall@stijn yes, you can run an in memory db with peer server using on-prem and connect to it with client#2018-03-2720:38jaret@stijn
$ bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d baskethead,datomic:
#2018-03-2813:15stijnThanks!#2018-03-2813:33jaretNp! Note that I left in “baskethead” that’s whatever you want to serve the DB as. I copied over from a project where I am in the process of messing with schema and didn’t remove that.#2018-03-2818:00stijn😄#2018-03-2720:39jaret(def cfg {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "localhost:8998"})
(def client (d/client cfg))
(d/list-databases client {})
(def conn (d/connect client {:db-name "your-db-name"}))
#2018-03-2721:13markbastianHow might I use datomic’s history to get key-value pairs over time? For example, suppose I have a series of weather measurements that look like this:
{:weather/temp 105.0,
:time/toa #inst"2018-01-03T00:00:00.000-00:00",
:db/id 1,
:weather/location "Eagle, ID"}
What I want is to do two things:
1. Get the current weather at a given location. This is easy enough:
(d/pull (d/db conn) '[*] 1)
2. Get a time series of all :time/toa to :weather/temp values. I tried a couple of things:
;;This returns every combination of toa and temp.
(d/q
'[:find ?toa ?temp
:in $ ?e
:where
[?e :time/toa ?toa]
[?e :weather/temp ?temp]]
(d/history (d/db conn)) 1)
;;This returns combinations where the transaction is the same, but doesn't quite do what I want.
(d/q
'[:find ?toa ?temp
:in $ ?e
:where
[?e :time/toa ?toa ?t]
[?e :weather/temp ?temp ?t]]
(d/history (d/db conn)) 1)
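(An editorial aside on the two queries just above, a sketch, not from the thread: history databases contain both assertions and retractions, so a same-transaction update contributes a retracted/asserted pair that cross-multiplies. Binding the fifth, added? position of each clause to true keeps only assertions:)

```clojure
;; Only asserted (toa, temp) pairs from the same transaction.
(d/q
  '[:find ?toa ?temp
    :in $ ?e
    :where
    [?e :time/toa ?toa ?tx true]
    [?e :weather/temp ?temp ?tx true]]
  (d/history (d/db conn)) 1)
```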
Any idea as to the right query? Alternatively, is the right answer to just map over my history and take those pairs?#2018-03-2722:18adammillerI'm building an app that will eventually deploy on Datomic Cloud (or that is the intent). However, my tests I like to spin up in memory db on demand. There are of course slight differences in using the cloud vs. peer api so I was wondering how people typically handle this? I saw Robert's solution in his Clojure bridge code he is building but was wondering what others do in this situation?#2018-03-2723:19Datomic Platonic@adammiller you can use the peer api in production as well, so if you'd like to use the same api for dev and production just still with peer api, that's what we're doing#2018-03-2723:21Datomic Platonics/still/stick#2018-03-2723:32steveb8n@adammiller I’m building my own version of Roberts solution, based on component. basically the same idea, without dynamic binding#2018-03-2723:49adammiller@clojurians873 you can't use the peer api with datomic cloud though, correct?#2018-03-2723:50adammiller@steveb8n I'd be interested in seeing what you come up with. I think whatever end result that can be achieved would be nice to have a very small open source lib we can pull into new projects to handle this. I'd think it would be a common issue.#2018-03-2800:07Datomic Platonic@adammiller correct. we're going to deploy the on-prem version on an ec2 instances backed by postgres, using the datomic-pro maven library bundled with the download. it turns out so much of our code had (datoms ...) and other peer-only features#2018-03-2800:08Datomic Platonicor is it (entitiy ...) instead of datoms, some functions are only avilable on the peer server#2018-03-2800:09adammillerYeah, api is just slightly different in certain situations.#2018-03-2800:10Datomic Platonicso in dev we use mount (its like sierra's component) to start datomic in memory peer, and run our tests etc, knowing it will work in production (but we are stil amateurs!! 
😼 )#2018-03-2800:11Datomic Platonicthe other advantage of using on-prem version is that we can start postgres on our laptops and run the full version with tests, etc#2018-03-2800:13adammillerYes, I'm also using mount and I determine by the config currently whether to launch a peer or client connection then use Robert's method basically to wrap the api between client and peer api methods (https://github.com/robert-stuttaford/bridge/blob/master/src/bridge/data/datomic.clj)#2018-03-2800:15Datomic Platonici like robert's approach; it's a nice first try of bridging the gap#2018-03-2800:15Datomic Platonici would have used a multimethod or so#2018-03-2800:16Datomic Platonicbut still would be scared of breaking things deep inside the AWS cloudformation stack#2018-03-2800:17adammillerproblem with multi-method i believe is you would have to pass the mode around (at least if you were thinking of dispatching on that). His method works nice using the with-datomic-mode macro to bind what mode we are working in.#2018-03-2800:20adammillerNone of this would be needed if we could launch an in memory db from code so that we could connect with the client but I don't believe that is possible.#2018-03-2800:26noonianWould be amazing if datomic-free supported the client api#2018-03-2801:59Drew VerleeHow would i approach the abstract problem of walking a graph from leaf to root(s) with datomic. Is it even a good fit for this type of question? I understand its a graph db, but its not clear how well the semantics support this type of thing.#2018-03-2802:00Drew Verleesay n1 -> n2 -> n3 and also n1 -> n4 and just for contrast n5 -> n6
if I handed this thing n1 it would return n1 -> n2 -> n3 and n1 -> n4 but not n5 -> n6#2018-03-2802:03Drew VerleeI think component pulls nested components so that's one way to think about tackling this.#2018-03-2802:34chris_johnsonSo, this is off the top of my head pseudocode but you would navigate the “edges” that are ref values from one entity to the other#2018-03-2802:35chris_johnsonI won’t try to come up with a schema on the fly, but your query would look something like#2018-03-2802:36chris_johnsonhm, wait - sorry, I misread your question and was about to answer a different one 🙂#2018-03-2802:37chris_johnsonspecifically, “how do I walk from root to leaf of a structure I know”, where I now think you’re asking “how do I find all the leaves and paths to them from a given root without knowledge of the structure beforehand”#2018-03-2813:10Drew VerleeNot quite. I’m saying, given a leaf, how do you walk back to the root.
Put another way, given X find all the things that X depends on.
I have that information in a relational format:
item | deps on
X | Y
Y | Z
But I’m trying to find the right data structure for storing it for my purpose, which is answering questions like: given X, walk backwards to all its deps. Then reverse that list:
so foo(X) => [Z Y X] where the order represents that Ni + 1 depends on Ni#2018-03-2813:11Drew Verleedatomic or datascript might not be an ideal way to do this. But if it is, then it might offer some advantages for storing and querying that data.#2018-03-2816:52Drew VerleeIn fact, after some thought, it's easy enough to express this with hashmaps…#2018-03-2804:14steveb8n@adammiller no problem. Once I’ve tested it in my codebase, I’ll be happy to extract a micro-library.#2018-04-1204:20bmabey@U0510KXTU I just saw this thread... did you ever extract a micro-library?#2018-04-1204:57steveb8nSure did https://github.com/stevebuik/ns-clone#2018-03-2821:26madstapThe first link on this page is wrong https://docs.datomic.com/cloud/time/log.html#2018-03-2913:20jaretThanks for catching that! I’ll correct that today.#2018-03-2912:06stijnI understand that the entity API is not available in the Client API (and hence Datomic Cloud). Is this something that will be added later on (i.e. peers on datomic cloud) or is it certain this will never be part of the cloud solution? Reason I'm asking is that the Entity API is a great match with graph query systems like GraphQL, om.next, qlkit.#2018-03-2913:22Alex Miller (Clojure team)pull is generally a better match with Client#2018-03-2913:27jaret@stijn The entity api was not well-suited for a wire protocol, and wasn’t included in the client API.
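An editor's sketch (not from the chat) of the leaf-to-root walk Drew asks about above, using recursive Datalog rules. The attribute :item/deps-on, the entity id x-eid, and the db value are all hypothetical; the result set is unordered, so producing the [Z Y X] ordering would still require a topological sort afterwards.

```clojure
;; Hypothetical schema: :item/deps-on is a cardinality-many ref from an
;; item to the items it depends on.
(def dep-rules
  '[;; base case: a direct dependency
    [(depends-on ?x ?d)
     [?x :item/deps-on ?d]]
    ;; recursive case: a dependency of a dependency
    [(depends-on ?x ?d)
     [?x :item/deps-on ?m]
     (depends-on ?m ?d)]])

;; Everything ?x transitively depends on:
(d/q '[:find [?d ...]
       :in $ % ?x
       :where (depends-on ?x ?d)]
     db dep-rules x-eid)
```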
and as @alexmiller indicated, pull is the client alternative.#2018-03-2913:29jaretThe entity api, due to its laziness, is chatty and therefore a misfit over the wire.#2018-03-2913:31stijnOK I understand.#2018-03-2913:32stijnIs there a specific reason that peers could not be part of a Datomic Cloud installation (in the future)?#2018-03-2913:34stijn(just asking because we need to make a choice between an on-prem or cloud installation)#2018-03-2914:25robert-stuttaford@stijn some nice notes here which may help you: Datomic Cloud conj workshop notes https://docs.google.com/document/d/1WhotOK6v0ZkBBc2G6s5BKp8gOEP9pXazik_4hcArZ9o/edit#2018-03-2915:03jaretDatomic 0.9.5697 now available, important security fix for free: and dev: transactors.
https://forum.datomic.com/t/important-security-update-0-9-5697/379#2018-03-2916:27donaldballFWIW the Release Date for 0.9.5697 on the my.datomic downloads page appears to be incorrect#2018-03-2916:29jaretThanks Donald! I must have missed a step in updating the page. I’ll see if I can update that#2018-03-2916:35jaretI’ve updated the date! Thanks again.#2018-03-2919:27alexandergunnarsonSo I've been wondering for a while now — will Datomic support Google Spanner?
It seems to me that if it supports MySQL, Postgres, and Oracle, it should in theory be able to (in a way) support Google Spanner, which has an ANSI 2011 SQL interface (admittedly not identical to the respective interfaces of each of the flavors currently supported, but sufficient for Datomic's needs, I would have to assume). Then again it seems that Datomic's single-point-of-failure model would effectively preclude horizontalizability and thus would negate the benefits Google Spanner has over other strongly consistent backends. From what I understand, it's not just about slapping a Datomic-flavored Datalog interface over top of a single (potentially massive) EAVT table; there's (at least) the transaction queue, transaction functions, and datom cache to take into account as well.#2018-03-2920:35favilaSpanner should work, but it's pricey (vs google's cloud mysql or postgres) and you don't get many benefits from it. It does auto-splitting (sharding) for reads and writes, but the records datomic stores are immutable and trivially memcached-able so who cares.
I should have asked instead: are there plans to implement a Datomic interface that leverages Spanner's horizontally-scalable ACID guarantees in order to overcome Datomic's limits on write scalability?#2018-03-2922:10faviladatomic's limits on write scalability are inherent in its single-writer design. I don't see that changing#2018-03-2922:10favilaor, I would be very surprised if it changed#2018-03-2922:31alexandergunnarsonAgreed. I would be very surprised as well — pleasantly so 🙂 But I can at least partially envision how Datomic's features might be implemented on top of Spanner (with the qualification that these are not polished thoughts). For instance:
- A Datomic-flavored Datalog query engine has been built for multiple SQL backends; the code for that could be nearly completely reused for Spanner.
- There could be a single, large EAVT table for all datoms (not accounting for indices of various sorts).
- It seems that, without a single-writer model, Datomic's transaction queue would have to be poll-based, unless there's some analogous push-based mechanism inherent to Spanner (doubt it, but possible). The poll mechanism would require a select of all rows whose timestamp was after the last poll (accounting somehow for the edge case of rows with the same exact timestamp that had been inserted after the last poll).
- Transaction functions could (ostensibly) be implemented using Spanner SQL transactions run on the peer.
- Peer cache creation/maintenance is a non-issue, as it seems not to be dependent on the single-writer model.#2018-03-2922:38alexandergunnarsonDoes that assessment seem reasonable to you?#2018-03-2922:52favilaI worry about the efficiency of that table design#2018-03-2922:53favilaconceivably, you could use spanner's "timestamp" as a transaction id (not a transaction time--that would have to be separate)#2018-03-2922:54favilaand use timestamp bounds for d/as-of (but not d/since)#2018-03-2922:54alexandergunnarsonFair; it's the most dead-simple of course, so that's why I mentioned it. Plus when I talked to Paul DeGrandis at Datomic a while back, he said that's essentially how the Datomic interfaces to the various SQL backends are implemented. (Could have changed though)#2018-03-2922:54favilano, that's not true at all#2018-03-2922:54alexandergunnarsonAh, I had no idea#2018-03-2922:54faviladatomic uses sql as a key-value blob store#2018-03-2922:55alexandergunnarsonHeh that would make a lot of sense given that it seems to me that that's how you'd need to do it in (at least several) NoSQL backends#2018-03-2922:56alexandergunnarsonAh interesting, thanks for the gist!#2018-03-2922:56alexandergunnarsonAnd that makes sense about using the timestamp as a txn ID#2018-03-2922:57favilaI have a feeling you could do it in spanner with careful table and index design, but whatever api layer there is inside datomic now assumes it can use a kv store lazily. 
I don't know if those internal interfaces are easy to retarget to a storage layer that actually represents everything first-class#2018-03-2922:58favilaevery "transactor" would have to have a copy of the necessary transaction function code, and have to do all contingent reads inside its write transaction and retry if it lost (I think?)#2018-03-2922:59favilaI'm not sure how much actual parallel tx-ability you could get in practice; depends on what those contingent reads are#2018-03-2923:00alexandergunnarsonYeah I'm not sure about the internal interfaces; also out of curiosity what do the ids represent in the schema you sent? I'm trying to mentally map each of those fields to EAVT. id is E, rev is T, map is A, and val is V?#2018-03-2923:01favilano, they are unrelated#2018-03-2923:01favilaid is a uuid for a block, or one of the mutable "pods" that holds references to the head#2018-03-2923:01favilarev is a revision counter, used only for those mutable rows#2018-03-2923:02alexandergunnarsonAh interesting... I had no idea that was the sort of implementation Datomic used under the hood#2018-03-2923:02favilathe "value" of the id is either map (edn) or val (a blob of fressian)#2018-03-2923:02alexandergunnarsonDatascript is very simple by comparison haha#2018-03-2923:02alexandergunnarson(And much slower despite being in-memory)#2018-03-2923:02favilahttp://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2018-03-2923:02favilathat may help#2018-03-2923:02alexandergunnarsonAh yes.
That's been on my reading list for a while#2018-03-2923:03favilathis is the same storage layout for all storage backends#2018-03-2923:03alexandergunnarsonThanks for sharing!#2018-03-2923:03favilaso the storage layers don't actually "see" the datoms#2018-03-2923:03favilathey're compressed into binary blobs of fressian#2018-03-2923:03alexandergunnarsonI see; fascinating#2018-03-2923:04favilathe blobs reference each other weakly by id to form a tree structure#2018-03-2923:04favila(weakly meaning the storage layer doesn't know about it)#2018-03-2923:05alexandergunnarsonInteresting; because they're referencing them "beneath" the binary blob compression#2018-03-2923:05favilaso spanner by itself takes care of some of this if you can represent the datoms directly in the storage layer efficiently enough#2018-03-2923:05favilabut just doing this in spanner is pointless#2018-03-2923:05favila(or, spanner provides no benefit over any other KV store)#2018-03-2923:07alexandergunnarsonNot clear why that is yet; is it because read-write-locking transactions become the bottleneck?#2018-03-2923:08alexandergunnarsonI was under the impression that Spanner could handle txn parallelism quite handily but then again I haven't done a deep dive into the docs#2018-03-2923:08favilayes read-write locking#2018-03-2923:08favilayou always need to read the previous tx to prepare the next one#2018-03-2923:09alexandergunnarsonThat makes sense#2018-03-2923:10favilaif you can design the tables in some way that you can "shard" the previous state they need to read, then parallel writes are at least possible#2018-03-2923:10favilaotherwise, you are effectively single-writer anyway, since each txor would just race to tx, but possibly execute their tx multiple times trying to "win"#2018-03-2923:11favilabut think about preserving the :db/txInstant invariant (that it is always increasing)#2018-03-2923:11favilacan you write a spanner sql SELECT that would not cause a retry if another tx 
interceded?#2018-03-2923:12favilaanyway, got to go, this is interesting though#2018-03-2923:13alexandergunnarsonI don't know enough about Spanner to answer intelligently; my guess is it would be delayed by read-write locking#2018-03-2923:13alexandergunnarsonOr cause a retry; however that "locking" is implemented (spin lock or not)#2018-03-2923:13favilaI think tx fails if any reads were invalidated#2018-03-2923:14favilathen the txor must rerun#2018-03-2923:14alexandergunnarsonYes, very interesting! And I get your point about the monotonicity of the :db/txInstant, but I wonder whether you could just use the Spanner timestamp in place of that?#2018-03-2923:15favilayou can backdate txInstant#2018-03-2923:15favilayou would lose that ability with timestamp#2018-03-2923:15alexandergunnarsonAh there's an issue, yes#2018-03-2923:15alexandergunnarsonHow useful is backdating though?#2018-03-2923:16alexandergunnarsonA DB restore from backup via Spanner should preserve the original timestamps, for one#2018-03-2923:17alexandergunnarsonBut about backdating in general, it seems misleading/inconsistent to say "I'm transacting at time X and recording that it transacted at time Y"#2018-03-3001:27favilaIt’s for imports and creating time-indexed views of some other source data. The technique is called “decanting”#2018-03-3001:28alexandergunnarsonHuh, interesting; I can see the appeal of the feature for import purposes but I haven't run into decanting before#2018-03-2919:52souenzzoAre there plans to make EntityMap work with #clojure-spec?#2018-03-2920:00Alex Miller (Clojure team)there’s a ticket about this, haven’t decided what the course of action will be yet#2018-03-2920:01Alex Miller (Clojure team)https://dev.clojure.org/jira/browse/CLJ-2041#2018-03-3003:16caleb.macdonaldblackTrying to find a solution that does this within the query.
I’m aware I could just use (map first result) afterwards.#2018-03-3012:02donmullen@caleb.macdonaldblack if this is on-prem you can do :find [(pull ?e [*]) ...] to get a vector of maps#2018-03-3012:06caleb.macdonaldblackAhh thank you! That's exactly what I was looking for. #2018-03-3119:15James VickersHas anyone ever gotten :db.error/transactor-unavailable Transactor not available when using Cassandra? I'm getting it when trying to submit any transactions over a small size (like 1k Datoms). I didn't have this problem at all with PostgreSQL, though now my transactor and storage service are on different nodes. Anything I should look at to investigate?#2018-04-0209:37igrishaevHi! I’ve got a Datomic Pro Starter Edition account I registered about a year and a half ago. When I try to use an up-to-date Datomic release, the transactor says your License Key cannot be used with that version. My key expired on Sep 21, 2017 and there is no button or link to update it. So the question is, how can I use the latest release of Datomic with my account? Thanks.#2018-04-0213:18jaret@igrishaev That’s the intention of Starter. To give users 1 year from signup worth of Datomic use (perpetually) to try it out. The next step would be purchasing pro or continuing to use the versions released prior to your expiration date.#2018-04-0215:06Datomic PlatonicHas anyone needed more than 4GB RAM for the transactor or 4GB RAM for the peer? How many datoms did you have when you reached those limits?#2018-04-0215:21magraHi, I have datomic free on my laptop.
0.9.5697 worked fine, I can't connect to 0.9.5697 though.
I keep getting:
JdbcSQLException Falscher Benutzer Name oder Passwort
Wrong user name or password [28000-171] org.h2.engine.SessionRemote.done (SessionRemote.java:568)
In the .properties file I tried both
storage-datomic-password=my-password
and
storage-datomic-password="my-password"
I tried to get databases with:
(d/get-database-names "datomic:)
What am I missing?#2018-04-0215:23marshall@magra https://docs.datomic.com/on-prem/configuring-embedded.html#2018-04-0215:24marshallif it’s a remote peer you need to enable remote storage access https://docs.datomic.com/on-prem/configuring-embedded.html#sec-2-2#2018-04-0215:24magra@marshall It is localhost.#2018-04-0215:25marshallyou can access localhost without setting a password (https://docs.datomic.com/on-prem/configuring-embedded.html#sec-1)#2018-04-0215:28magraok. I will settle for that then. Still, should the password in .properties be written with single quotes or just the letters? I followed the manual you mentioned and it is just blank there.#2018-04-0215:29magraSorry, double quotes or just the letters. Single quotes produce a stack trace.#2018-04-0215:29marshallyou shouldn’t need quotes#2018-04-0215:29robert-stuttafordyou must actually use “localhost” and not some other name even if it resolves to your local, right?#2018-04-0215:30marshall@robert-stuttaford not sure. I’d have to doublecheck#2018-05-2214:43souenzzoNope. But you can test it and give us feedback 🙂#2018-05-2215:21val_waeselynckFrankly, if it were me, I would probably not do something so experimental at my client's - or at least make it easy to get out of this strategy#2018-05-2215:24Dustin GetzYes, I fear the answer is to tell them not to use Datomic#2018-05-2215:27Dustin GetzHowever I am probably willing to maintain a cljs/js client, but not without coordinating with cognitect#2018-05-2219:04hmaurer@U09K620SG Cognitect is supposed to be open-sourcing the documentation of the client protocol#2018-05-2219:05hmaurerI am not sure when that will be though…#2018-05-2302:40souenzzoHey I just made some snippets about how to access datomic from javascript (using graalvm)
Important notes:
- :heavy_exclamation_mark: Experimental :heavy_exclamation_mark:
- You don't need to run JS "inside" clj/java. You can run a JS file with graal directly (but you will need to set up the classpath)
- in the middle of development I realized that it would be easier to use the Java API than the Clojure API
https://gist.github.com/souenzzo/c4719d45e804767c97f6f5be1bcdd1c5#2018-05-2313:40hmaurer@U2J4FRT2T ah, using graal. nice one!#2018-05-2221:51eraadSome entities have the Stripe-related property, others don’t.#2018-05-2313:18chris_johnsonQuestion re: on-prem and datomic:ddb// uris - is there a way to support STS-mediated IAM roles (e.g., access-key, secret-key, token) using Datomic on-prem, or does the role used by Datomic systems have to be attached to an IAM user with programmatic access?#2018-05-2313:38chris_johnsonI created a forum post about this question too, since that seems to be an emerging best practice: https://forum.datomic.com/t/dynamodb-datomic-ddb-connect-uri-and-aws-sts-roles-can-we-provide-the-token-for-a-keypair/436#2018-05-2314:03chris_johnsonYou know what, I think the main issue here is an abject failure of reading comprehension on my part. I will report back in one (1) Docker build/deploy cycle time.#2018-05-2314:27chris_johnsonYes - it was me misreading the docs and missing the line specifying that if you provide no aws_access_key or aws_secret_key in the URI, Datomic will fetch the credentials from the default chain, which works just fine. 😅#2018-05-2318:17sparkofreasonI'd like to be able to run tests against a clean database for code written for Datomic cloud. When running against the cloud instance I can create/delete databases as needed, but this doesn't seem possible when running a local peer server backed by the mem transactor. Is there a way to start/kill peer servers from code for test purposes?#2018-05-2318:36timgilbertIt should be possible to create and delete databases willy-nilly with the mem:// transactor, our unit test suite does this kind of thing a lot#2018-05-2318:43favilaI don't think it's possible to dynamically change the dbs that a peer server is serving#2018-05-2318:44favilaI also am not sure you can serve a mem db from a peer server anyway#2018-05-2320:52marshallPeer server can indeed run mem dbs. 
Giving it a mem db URI at startup will cause the peer server to create and serve that mem db#2018-05-2320:52marshall@U066LQXPZ ^#2018-05-2322:13sparkofreasonRight. How can I start/stop peer servers from code?#2018-05-2403:45sparkofreason^^^ @U05120CBV#2018-05-2318:45favilafor the first problem (can't reload peer server's server list) maybe you can figure out how to start the peer server directly (likely it's just a clojure function) and make your own peer process with a "reload" or "change dbs" side channel#2018-05-2318:46favilathis is just a slightly faster and more elegant kill-and-restart#2018-05-2320:28cmcfarlenHello datomic slack. I'm seeing some strange behavior with :db.type/bigint attributes and queries. The issue involves storing values as clojure.lang.BigInt and querying as java.math.BigInteger (using the 'q fn). In the memdb, I have to query the type that I gave. Using a sql storage backend, I must query using java.math.BigInteger regardless. Using the 'entity fn and an ident ref I can query using either type for any storage.#2018-05-2320:30cmcfarlenI can kind of reason about why this might be, but the inconsistency was surprising.#2018-05-2320:32cmcfarlenhttps://gist.github.com/cmcfarlen/33d9a8f7e0f926db7d112326e7523792#2018-05-2320:32cmcfarlenThis code reproduces the issue#2018-05-2415:51adamfreyis there a way to shutdown a datomic cloud client? I've found that when I create a datomic cloud client in a script, my script will hang instead of exiting. 
I tried to call shutdown-agents but that didn't work#2018-05-2416:06Alex Miller (Clojure team)can you thread dump and see what threads are still alive?#2018-05-2416:10adamfreyyes, but I don't know how to do that#2018-05-2416:10adamfreyI found someone with a helpful blog post: https://puredanger.github.io/tech.puredanger.com/2010/05/30/clojure-thread-tricks/#2018-05-2416:10adamfrey😉#2018-05-2416:12Alex Miller (Clojure team)don’t trust that guy, he’s an idiot#2018-05-2416:13adamfrey(def d-client (datomic.init/init-client (datomic.init/conn-config)))
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
2018-05-24 12:11:56.637:INFO::main: Logging initialized @12428ms
=> #'price-alerts.query-test/d-client
(shutdown-agents)
=> nil
(prn
(.dumpAllThreads
(java.lang.management.ManagementFactory/getThreadMXBean)
false
false))
#object["[Ljava.lang.management.ThreadInfo;" 0x5ebe1552 "[Ljava.lang.management.ThreadInfo;@5ebe1552"]
=> nil
(on-exit (fn* [] (prn "done.....")))
=> nil
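As an editorial aside on what the thread dump is for here: only live non-daemon threads keep the JVM from exiting, so listing them shows what is holding a script open. A small sketch (not from the chat):

```clojure
;; Print live non-daemon threads; these are what prevent JVM exit.
(doseq [^Thread t (.keySet (Thread/getAllStackTraces))]
  (when-not (.isDaemon t)
    (println (.getName t) (.getState t))))
```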
#2018-05-2416:14adamfreyhere's output from my script#2018-05-2416:15Alex Miller (Clojure team)if you’re in a repl, just ctrl-\#2018-05-2416:16adamfreythis is in Stu's transcriptor, but I've noticed the same hanging behavior in all my clj run tasks that start up a datomic client#2018-05-2418:57sparkofreasonFor test purposes, I am able to start/stop peer servers by running/killing the run script and its child java process by calling the OS shell from clojure. It's an ugly solution, probably OS-dependent, and every call to run takes a fair amount of time to complete. Dev/test processes would be facilitated if I could run just one peer server process and create/delete mem DBs programatically.#2018-05-2420:48adamfreyI'm using datomic cloud, not the peer server, so I don't have a run script in my case#2018-05-2421:06matthaveneris that ctrl-\ documented somewhere? I’d never heard of it until today.. wondering if there’s more 🙂#2018-05-2421:48Alex Miller (Clojure team)it’s a jvm thing (ctrl-break on windows, ctrl-\ in *nix)#2018-05-2421:48Alex Miller (Clojure team)I don’t think there are any other standard handlers other than ctrl-c#2018-05-2421:48Alex Miller (Clojure team)and most Clojure repls use ctrl-d to quit (although some use ctrl-c)#2018-05-2512:27jetzajacJust curious. When I go to Datomic with the query (:some-key (entity 43)) it has to return datom with the latest tx possible. Given it uses [e a v t] index for that, does it mean that it will scan entire history of the attr? or th required Datom will be found before the historical one somehow?#2018-05-2512:43souenzzoThere is a EAV Index and this operation should be fast as a hash-map access.#2018-05-2512:45jetzajacEAV includes just recent data somehow?#2018-05-2512:46souenzzoIt's something like "lazy" cache.
On the first access it can be slow, if your DB is larger than your RAM.#2018-05-2601:54sparkofreasonIt occurred during testing; app code creates its own connection while test code uses another. The workaround is to be sure to create the test connection after the app code runs.#2018-05-2614:35mishagreetings, how does licensing work with staging environments? Is there something to read in any detail?
for example, if I have 3 envs: sandbox, qa, production, what should my datomic deploy look like?
1 deploy per env, or single deploy with different DBs within it per env? Or something completely different?#2018-05-2614:37mishaNext, is datoms-limit™ - per "transactor" or per DB "within transactor"?#2018-05-2713:54Dustin GetzStu wrote in 2015: "10 billion datoms is not a hard limit, but all of the reasons above you should think twice before putting significantly more than 10 billion datoms in a single database." https://groups.google.com/d/msg/datomic/iZHvQfamirI/RANYkrUjAEwJ#2018-05-2708:56dominicm@misha a license is for a "system" and includes staging and qa.#2018-05-2708:56dominicmTo be super clear, that means you would have 1 license to cover all 3 environments.#2018-05-2711:59mishaso "system" is my system, not "datomic setup", nice#2018-05-2716:32donaldballDoes anyone know why d/tx-range doesn’t report on the datoms that appear transacted in a new database? In such a database, I see system datoms transacted at times [0 54 56 63].#2018-05-2718:13favilaIt starts at t=1000#2018-05-2718:14favilaIf you want everything you can look at the index for the db/txInstant attribute#2018-05-2718:14favila:aevt#2018-05-2717:56misha@dustingetz thanks, read that one few times. It is just, almost everyone in google groups uses db/sharding/system/nodes/connections /transactors/peers very loosely (or at least db-as-application-data vs db-as-actual-datomic-db-term - interchangeably). And depending on actual meaning – answer's meaning changes dramatically.#2018-05-2718:58bkamphaus@misha the meaning is per database, though several databases adding up to exceed 10 billion datoms behind the same transactor would encounter some perf challenges as well. 🙂 And you’re probably safe assuming when anyone on the Datomic team says “database” they mean database rather than peer, transactor or something else. 
Precision in use of terminology is definitely a goal there in community support.#2018-05-2720:18misha@bkamphaus thank you#2018-05-2803:02Drew VerleeWhat's the idiomatic way to use a previous value as the argument to the next value? Say I want to update all the people with name “drew” to “drew rocks” or “drew” + “something”
6 :person/name “drew rocks” evening
6 :person/name “drew” morning#2018-05-2803:04Drew Verleei can query for the entity id and name then use those in a transact.#2018-05-2808:25val_waeselynck@U0DJ4T5U1 I assume the challenge here is to do it without race conditions? In that case, the way to go is to use a transaction function (https://docs.datomic.com/on-prem/database-functions.html), e.g [[:myapp/replace-first-name "drew" "drew rocks"]].#2018-05-2808:31val_waeselynckIf you're using Datofu (https://github.com/vvvvalvalval/datofu#writing-migrations-in-datalog), you can use the more general :datofu.utils/datalog-add transaction function, which acts as a Datalog interpreter, so that you don't have to create and install a custom transaction function. E.g:
[[:datofu.utils/datalog-add
  '[:find ?e ?a ?new-name :in $ ?old-name ?new-name :where
    [(ground :user/first-name) ?a]
    [?e ?a ?old-name]]
  ["drew" "drew rocks"]]]
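For on-prem, the custom transaction function route val mentions can be sketched as below; the function name :myapp/replace-first-name and the attribute :user/first-name are illustrative, not from a real schema. Because the function runs inside the transactor against the current db value, the read-then-write is race-free:

```clojure
(require '[datomic.api :as d])

;; Install the function once (it executes inside the transactor):
@(d/transact conn
  [{:db/ident :myapp/replace-first-name
    :db/fn (d/function
            '{:lang :clojure
              :params [db old-name new-name]
              :code (for [[e] (datomic.api/q '[:find ?e :in $ ?old
                                               :where [?e :user/first-name ?old]]
                                             db old-name)]
                      [:db/add e :user/first-name new-name])})}])

;; Then call it as data in a transaction:
@(d/transact conn [[:myapp/replace-first-name "drew" "drew rocks"]])
```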
#2018-05-2807:06ttxWhat is the justification for reverse reference of multi-arity "component" attributes returning only a single value in a datomic pull? Is there any way to return all the values? Relevant doc: https://docs.datomic.com/on-prem/pull.html#multiple-results#2018-05-2812:48favilaThe justification is that a component attr should always be the only way to reach its entity value#2018-05-2812:48favilaOtherwise it’s not truly a component#2018-05-2902:24souenzzo@U4XHJ3J9H you can do this
(let [db (d/db conn)
      {:keys [db-after]} (->> (d/q '[:find ?op ?e ?a ?v
                                     :where
                                     [(ground :db/retract) ?op]
                                     [(ground :db/isComponent) ?a]
                                     [?e :db/isComponent ?v]] db)
                              (vec)
                              (d/with db))]
  (d/pull db-after [:your/_pattern] eid))
#2018-05-2906:57ttx@U2J4FRT2T Not sure about what is being achieved by the code snippet. Can you please explain it to me?#2018-05-2907:36favilaThis removes all isComponent annotations, writes it to a local (I.e. forked) db, then pulls from that db#2018-05-2907:36favilaIn essence, temporarily un-isComponent-ing all attributes so a reverse pull will always be cardinality many#2018-05-2908:59ttxThanks!#2018-05-2820:21mishadatomic ♥#2018-05-2820:45mishaIs there an idiomatic way to not do the transaction if key-value did not change? now it does not duplicate fact assertion, but still generates a transaction datom.
It sounds like a tiny optimization, but, for example, for .csv file imports, if I want more granular error reports, I would not be able to put all the lines into a single transaction; I'd need to batch them. But that would "waste" 1 datom per batch even if the batch changed nothing in the DB. The smaller the batches, the higher the chance of wasting precious datoms on repetitive (large) file imports. Especially if I put some meta info about the file import in the tx.#2018-05-2820:53mishadoes :db/noHistory reduce the datom count over time, or does it somehow just reduce space, and that's it?#2018-05-2820:54mishais it backed by excision under the hood?#2018-05-2820:57sparkofreasonHas anybody successfully accessed Datomic Cloud across peered VPCs? Our app uses another service that requires VPC peering, and the same procedures do not work when applied to Datomic.#2018-05-2820:59hcarvalhoaves@misha maybe you can avoid the empty transaction altogether w/ a transaction function by running a query on the transactor - that could negatively impact your transactor though, possibly more than just having the empty tx#2018-05-2821:03misha@hcarvalhoaves yeah, thought about that, and would need to evaluate incoming file frequency vs the amount of empty tx-datoms generated. However, transactions would contain fairly large nested maps, and comparing those with db data inside a transaction would likely be slower than I'd like it to be.#2018-05-2821:28mishaon the other hand, looking at the error types which make a tx fail, those are either network or dev errors: connection reset/timeout, and invalid value type. Which means, with enough testing, the ETL step can collect a batch of valid tx data from ~1k rows for a single transaction.#2018-05-2821:33mishaspeaking of errors. I'd be delighted to see the actual value, attribute and its type in the ExceptionInfo data map:
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/wrong-type-for-attribute Value 1 is not a valid :bool for attribute :foo/bar?
data: #:db{:error :db.error/wrong-type-for-attribute}
From the error above you cannot tell that 1 was in fact the string "1".#2018-05-2822:28Peter WilkinsHi all, just learning Datomic. Having a lot of fun. However I've hit 2 queries I need a bit of help with:
1: get name and id for topics not in category
(defn orphaned-topics [conn]
  (d/q {:query '[:find [(pull ?topics [:topic/name :topic/id]) ...]
                 :in $ ?taxonomy
                 :where [?t]
                 [?tax :taxonomy/id ?taxonomy]
                 [?tax :taxonomy/categories ?cats]
                 (not [?cats :category/topics ?t])]
        :args [(d/db conn) "z5ojxcs40azi"]}))
Error message is “Only find-rel elements are allowed in client find-spec”. I don’t understand what a find-rel is.
2: full text search returns a 500 server error after a long delay
(defn search-keywords [conn query]
  (d/q {:query '[:find ?entity ?name ?tx ?score
                 :in $ ?search
                 :where [(fulltext $ :keyword/phrase ?search) [[?entity ?name ?tx ?score]]]]
        :args [(d/db conn) query]}))
CompilerException clojure.lang.ExceptionInfo: Server Error {:datomic.client-spi/request-id “f90939d2-b6f2-4c55-a8e3-18af3fa7e0b5", :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message “Server Error”, :dbs [{:database-id “0ed6aab0-5e31-400f-8fd7-dc40dc67df98", :t 11, :next-t 12, :history false}]}
Relevant schema:
#:db{:ident :category/id, :valueType :db.type/string, :cardinality :db.cardinality/one :unique :db.unique/identity}
#:db{:ident :category/name, :valueType :db.type/string, :cardinality :db.cardinality/one}
#:db{:ident :category/topics, :valueType :db.type/ref, :cardinality :db.cardinality/many}
#:db{:ident :category/weight, :valueType :db.type/float, :cardinality :db.cardinality/one}
#:db{:ident :keyword/excludes, :valueType :db.type/string, :cardinality :db.cardinality/many, :fulltext true}
#:db{:ident :keyword/phrase, :valueType :db.type/string, :cardinality :db.cardinality/one, :fulltext true}
#:db{:ident :taxonomy/categories, :valueType :db.type/ref, :cardinality :db.cardinality/many, :isComponent true}
#:db{:ident :taxonomy/editable, :valueType :db.type/boolean, :cardinality :db.cardinality/one}
#:db{:ident :taxonomy/id, :valueType :db.type/string, :cardinality :db.cardinality/one :unique :db.unique/identity}
#:db{:ident :taxonomy/name, :valueType :db.type/string, :cardinality :db.cardinality/one}
#:db{:ident :taxonomy/organization, :valueType :db.type/ref, :cardinality :db.cardinality/one}
#:db{:ident :taxonomy-input/categories, :valueType :db.type/ref, :cardinality :db.cardinality/many}
#:db{:ident :taxonomy-input/name, :valueType :db.type/string, :cardinality :db.cardinality/one}
#:db{:ident :topic/document-count, :valueType :db.type/long, :cardinality :db.cardinality/one}
#:db{:ident :topic/id, :valueType :db.type/string, :cardinality :db.cardinality/one, :unique :db.unique/identity }
#:db{:ident :topic/keywords, :valueType :db.type/ref, :cardinality :db.cardinality/many, :isComponent true}
#:db{:ident :topic/name, :valueType :db.type/string, :cardinality :db.cardinality/one, :fulltext true}
#:db{:ident :topic/type, :valueType :db.type/ref, :cardinality :db.cardinality/one}
#:db{:ident :topic-type/company}
#:db{:ident :topic-type/risk}
Thanks for reading!#2018-05-2902:28souenzzo1- pass db, not conn to your functions. db is immutable
2- pull ?topics but matching :category/topics ?t
3- not sure if fulltext is available at datomic cloud#2018-05-2911:26chrisblomWhat are your experiences with storing timeseries in datomic?#2018-05-2912:36dominicm@chrisblom if you have a lot of it, it's not ideal.#2018-05-2912:38chrisblomyeah i’m finding out that it's not a great fit, we currently have a database that keeps on growing, with no good way to remove old data#2018-05-2912:39dominicmI think Nubank do something where they create new databases regularly, but that depends on a complex aws setup from what I recall.#2018-05-2912:40chrisblomi’m also looking into the timescale plugin for postgres, but no AWS RDS support yet unfortunately#2018-05-2913:05Christian JohansenIf you’re on AWS, might as well stick timeseries in Dynamo?#2018-05-2913:37matthavener@chrisblom if you want a “sliding window” snapshot of the data, you can store it in kafka or some unindexed store, and then load it into a memory datomic db at fixed intervals#2018-05-2913:37chrisblomthanks, i’ll look into Dynamo#2018-05-3007:43gavanitratehey folks, does anyone know if it is possible to perform a pull expression on an aggregate function? i.e.
'[:find ?pc (pull (distinct ?c) [:company/name])
:in $
... ]
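Pull patterns apply to plain entity variables, not aggregates, so the usual workaround is two steps: aggregate in the query, then pull separately. A sketch assuming the on-prem peer API and a hypothetical :company/project-count attribute:

```clojure
;; Sketch (peer API; :company/project-count is a hypothetical attribute):
;; aggregate in the query, then pull the collected entities in a
;; second step, since pull only accepts plain entity variables.
(require '[datomic.api :as d])

(let [rows (d/q '[:find ?pc (distinct ?c)
                  :where [?c :company/project-count ?pc]]
                db)]
  (for [[pc companies] rows]
    [pc (map #(d/pull db [:company/name] %) companies)]))
```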
#2018-05-3012:50stuarthalloway@gavanitrate no, pull takes entities only#2018-05-3015:17Drew VerleeI feel like it would be useful to expand http://www.learndatalogtoday.org/ to have more examples and show more options. For example, the pull api. Does anyone know if the maintainer takes pull requests or who to talk to about that?#2018-05-3015:20val_waeselynckWhy not ask him directly? It does say PRs/feedback welcome https://github.com/jonase/learndatalogtoday#feedback-welcome#2018-05-3015:22Drew Verleegood point. I suppose i should have put the emphasis on the first part. I’m more curious if people think it should be expanded. I personally find it hard to learn datomic without working through the examples. I wonder if maybe i should be trying to understand it through the grammar.#2018-05-3016:10misha@U0DJ4T5U1 https://github.com/Datomic/day-of-datomic/tree/master/tutorial might be useful for you then#2018-05-3016:50Drew Verleethanks @U051HUZLD#2018-05-3113:43mishacan I declare a datomic database-function with more than 1 arity?#2018-05-3113:52val_waeselynckI don't think so, but you'll be fine using collections to achieve the same objective#2018-06-0110:34mishacan I pass _ as an argument to d/q? to avoid explicitly implementing extra arity in cases like:
(defn f
  ;; what I want to write:
  ([db a] (f db a '_))
  ;; what I have to write:
  ([db a] (d/q '[:find [?e ...] :in $ ?a :where [?e ?a]] db a))
  ([db a v] (d/q '[:find [?e ...] :in $ ?a ?v :where [?e ?a ?v]] db a v)))
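Query inputs can't bind a wildcard, but one alternative sketch (peer API assumed; the function name entities-with is hypothetical) sidesteps the duplicated query by walking the :aevt index for the value-less arity:

```clojure
;; Sketch (peer API): the attribute-only case reads the :aevt index
;; directly, so only the three-argument case needs a datalog query.
(defn entities-with
  ([db a]
   (map :e (d/datoms db :aevt a)))
  ([db a v]
   (d/q '[:find [?e ...] :in $ ?a ?v :where [?e ?a ?v]] db a v)))
```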
#2018-06-0111:42souenzzo(defn f
  [db & args]
  (let [[_ & syms
         :as frags] (into '[?e] (for [i args
                                      :when (not (nil? i))]
                                  (gensym "?arg-")))
        query (into '[:find [?e ...] :in] (concat syms [:where]))]
    (apply vector (conj query frags) db args)))
#2018-06-0116:02mishano kappa
the actual query I need it for is not that much larger than the one in the example, and I choose readability over the spell you suggested, @U2J4FRT2T
however, thank you : )#2018-06-0110:35mishaI know that pull can accept "*" as a string, but neither "_" nor '_ seem to work here ^^^#2018-06-0110:53mishais :db.install/valueType "exposed" to datomic users? Did anyone try installing any composite types yet? can't seem to google anything related#2018-06-0112:43Dustin Getz@U051HUZLD what are you trying to do?#2018-06-0116:07misha@U064X3EF3 I think I am asking exactly that.
@U09K620SG the use case is the usual one – to put something in the db without forgetting to pr-str/read-string. But in this particular case, I just stumbled upon it and wanted to explore.#2018-06-0116:11Alex Miller (Clojure team)yeah, datomic attribute types are fixed (for now at least). Clojure certainly makes it possible to consider extensible types at a future point though.#2018-06-0112:42Alex Miller (Clojure team)It’s not extensible if that’s what you’re asking#2018-06-0115:20bjIs it possible to include the transaction time of an attribute in a pull?#2018-06-0115:44Alex Miller (Clojure team)I think you have to use query to get to the transaction component and its attributes#2018-06-0208:50emil0rIs it possible to start up an instance of datomic free from inside an application? Ie, I don't spin one up with the provided scripts, but do it from inside the application#2018-06-0216:49ezmiller77@emil0r I am not sure I understood your question but I think you do need to run datomic as a service separately, though you could probably write a script to automate that process somehow.#2018-06-0216:50ezmiller77Does anyone know why datomic cloud no longer has the d/squuid func? Is it no longer needed?#2018-06-0222:40val_waeselynckNot needed since adaptive indexing. Wish they updated the docs about that.#2018-06-0304:26ezmiller77Thanks#2018-06-0311:18Andreas LiljeqvistIs this true for onprem as well?#2018-06-0314:20ezmiller77I put the question on the Datomic forum. We could document it there to some extent.#2018-06-0314:20ezmiller77https://forum.datomic.com/t/why-no-d-squuid-in-datomic-client-api/446#2018-06-0317:26val_waeselynck@U7YG6TEKW yes true for on prem as well#2018-06-0222:30cjsauerI think I remember this being discussed here before, but does anyone else have trouble with the Datomic Cloud SOCKS proxy timing out or otherwise acting unreliably in the face of frequent REPL reloads?
I regularly have to restart the proxy process in order to reconnect to Cloud.#2018-06-0304:25ezmiller77@cjsauer I've been experiencing that as well. It seems to close out periodically. One could wrap it in some sort of service to restart it when it crashes.#2018-06-0401:55ezmiller77Hi all, I've been struggling with what seems to be a dependency conflict problem between Datomic Cloud and ring. At least, it first presented itself in that guise. Now I'm less sure, but it's an error that appears when d/client is called. I've created a branch on a test repo to show what I mean: https://github.com/ezmiller/datomic-ring-dep-conflict/tree/exlusions-from-datomic-cloud. The errors that arise still smack of a dep conflict in the sense that there's a missing class: java.lang.ClassNotFoundException: org.eclipse.jetty.client.HttpClient, compiling:(cognitect/http_client.clj:1:1) and Caused by java.lang.ClassNotFoundException org.eclipse.jetty.client.HttpClient`. The full stack trace is in the README in the repo.#2018-06-0401:58Alex Miller (Clojure team)This is doc’ed on the Datomic faq page I think #2018-06-0401:59Alex Miller (Clojure team)https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2018-06-0401:59ezmiller77@alexmiller I think you are referring to the troubleshooting section referencing the jetty dep conflict? This: https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2018-06-0402:00ezmiller77Right. Yeah. In that branch of the repo, I've got those exclusions added. 
The error happens when you call d/client.#2018-06-0402:02Alex Miller (Clojure team)Hmm, well that’s more than I can diagnose on my phone :)#2018-06-0402:03ezmiller77🙂 So far it's been more than I can diagnose at all!#2018-06-0402:03ezmiller77Wasted at least 6 hours on this today.#2018-06-0402:04Alex Miller (Clojure team)If you exclude Jetty don’t you need to include it somehow ?#2018-06-0402:05Alex Miller (Clojure team)If you lein deps :tree what’s including jetty?#2018-06-0402:07ezmiller77Both datomic.cloud and ring reference parts of jetty normally, which creates the dependency conflicts. My understanding is that the exclusions are placed on one side to defer to the inclusions by the other package. In this case, the recommendation in the troubleshooting doc is I think deferring to the versions included by ring. Part of this, also, I gather, is that the way these packages work you can only have one version dependency in a project since they all somehow exist in a global space. (I'm not sure about this but I gathered it from a comment at the end of this thread: http://discuss.purelyfunctional.tv/t/how-to-detect-and-workaround-dependency-conflicts/516/4).
Here's the relevant part of lein deps :tree with the exclusions applied:
...
[ring "1.7.0-RC1"]
  [ring/ring-core "1.7.0-RC1"]
    [clj-time "0.14.3"]
    [commons-fileupload "1.3.3"]
    [commons-io "2.6"]
    [crypto-equality "1.0.0"]
    [crypto-random "1.2.0"]
  [ring/ring-devel "1.7.0-RC1"]
    [clj-stacktrace "0.2.8"]
    [hiccup "1.0.5"]
    [ns-tracker "0.3.1"]
      [org.clojure/java.classpath "0.2.3"]
      [org.clojure/tools.namespace "0.2.11"]
  [ring/ring-jetty-adapter "1.7.0-RC1"]
    [org.eclipse.jetty/jetty-server "9.2.24.v20180105"]
      [javax.servlet/javax.servlet-api "3.1.0"]
      [org.eclipse.jetty/jetty-http "9.2.24.v20180105"]
        [org.eclipse.jetty/jetty-util "9.2.24.v20180105"]
      [org.eclipse.jetty/jetty-io "9.2.24.v20180105"]
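One common way to resolve this kind of conflict (a sketch; the Datomic client coordinate and exact versions shown are illustrative) is to pin the jetty artifacts as top-level dependencies in project.clj, so a single version wins over all transitive requests:

```clojure
;; project.clj fragment (illustrative versions): top-level deps take
;; precedence over transitive ones in Leiningen's resolution.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]
                 [ring "1.7.0-RC1"]
                 ;; pin jetty once, for both ring and the Datomic client
                 [org.eclipse.jetty/jetty-server "9.3.7.v20160115"]
                 [org.eclipse.jetty/jetty-client "9.3.7.v20160115"]])
```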
#2018-06-0405:27ezmiller77What seems to be a solution was provided by @shohs on the Datomic Forum: https://forum.datomic.com/t/dependency-conflict-with-ring/447/4?u=emiller#2018-06-0405:28ezmiller77The exclusions suggested in the "Troubleshooting" text did not work. Removing them and then adding [org.eclipse.jetty/jetty-server “9.3.7.v20160115”] as a top-level dep does. At least so far...#2018-06-0408:52jumar@ezmiller77 I tried to diagnose your problem a bit more and I think you can solve it by explicitly using a newer version of jetty-server and jetty-client.
See my answer here: https://stackoverflow.com/a/50676715/1184752
Also the related change: https://github.com/ezmiller/datomic-ring-dep-conflict/pull/1/files#2018-06-0408:55jumarI've been following the official datomic cloud tutorial, which is pretty good. However, I've struggled a bit with the following
#:cognitect.anomalies{:category :cognitect.anomalies/forbidden,
:message
"Forbidden to read keyfile at ************/juraj-datomic-cloud/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile."}
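A client-config sketch (every value here is a placeholder) showing where a named AWS profile plugs in via :creds-profile rather than relying on ambient credentials:

```clojure
;; Sketch: all values are placeholders. :creds-profile selects a named
;; profile from ~/.aws/credentials instead of ambient AWS credentials.
(def client
  (d/client {:server-type   :cloud
             :region        "us-east-1"
             :system        "my-datomic-system"
             :endpoint      "http://entry.my-datomic-system.us-east-1.datomic.net:8182/"
             :proxy-port    8182
             :creds-profile "my-profile"}))
```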
#2018-06-0408:58jumarEventually I found that I can specify :creds-profile in the datomic client config, but only by reading the source code.
Although it was related to credentials profiles which datomic cloud's documentation doesn't use I think it would be useful to mention that in the documentation because it's pretty common to use profiles.#2018-06-0413:05marshallYou should be able to use any standard method of AWS credential management#2018-06-0413:05marshallas indicated here https://docs.datomic.com/cloud/getting-started/connecting.html#access-keys#2018-06-0413:05marshallthe environment you run in must have proper IAM credentials#2018-06-0413:05marshall(envars, aws profile)#2018-06-0413:12ezmiller77@jumar I was also able to get in with IAM. Did you grant access to the IAM group for SSH?#2018-06-0413:12ezmiller77Oh I see @U05120CBV already pointed you to the relevant section of the docs.#2018-06-0409:07jumarI'm evaluating Datomic cloud (Solo) and using socks proxy for connecting to the database.
I'm suffering from frequent connection errors (socks proxy connection being broken every ~10 mins). Did you encounter such problems before or is there some issue in my network?#2018-06-0412:52ezmiller77@jumar: that solution, specifying the server, worked for me as well. How did you think to try specifying the server? Was it named at some point in the lein deps :tree output? Or did you work it out somehow? I'm curious to know as I tried so many combinations, but never saw the jetty-server named...#2018-06-0413:32jumarIt's a transitive ring dependency therefore you should see it in lein deps :tree output. I had been already thinking about specifying jetty server deps in project.clj explicitly because that's one way how you enforce proper versions to be used in your project effectively overriding transitive dependencies.#2018-06-0412:54ezmiller77Regarding your trouble with the broken socks proxy connection, I am also experiencing the same behavior. It troubled me but I hadn't gotten to the point where I had the luxury of considering what to do about that. I thought I might use some sort of service that restarts something when it fails. Can't remember the names off-hand.#2018-06-0413:00jaret@jumar @ezmiller77 re: connection errors. We don’t see that happening. I do recall a previous user reporting something similar and they used autossh to get around laptop sleeps etc.#2018-06-0413:01jaret>Autossh for keeping alive the socks proxy:
>Not sure who to message with this, but I have a suggestion.
>I’m using datomic cloud and developing against it, which basically means a long running datomic-socks-proxy process. This was quite painful due to frequent timeouts and disconnects, causing me to have to keep jumping across and restarting it.
>I installed autossh instead and hacked the script to use this, and it is now much more stable (and survives sleeps of my laptop). I wonder whether it might be worth having the standard script check for the installation of autossh and if found, use that instead (and maybe print a message to the user if not found, before continuing with the regular ssh client).
>For anybody interested in my little hack, I just commented out the ssh command at the bottom of the script, and added the autossh one. Like this...
`#ssh -v -i $PK -CND ${SOCKS_PORT:=8182} #2018-06-0414:28cjsauerThanks for this. I've saved your suggestion as a gist for future reference: https://gist.github.com/cjsauer/01b288a7e6fe306372b90d1930575836#2018-06-0413:01jaretThat was their suggestion, I have not tested it ^#2018-06-0413:36conanIs it possible to add an entity reference in a transaction by using a lookup ref?#2018-06-0413:37conanso for example (d/transact db-conn [{:person/name "conan" :person/team [:team/id 123]}]) if i want to add a ref to a specific team entity to a person#2018-06-0414:05griersonIn Datomic how would I model a Runner's time during a race? for example "Alice" started at 10:35 but is still currently running. Would I have a :end nil then (update :end (now)) when she finishes?
But then I need to ask questions about the race, such as "Who is currently running?" (filter #(nil? (:end %)) runners)#2018-06-0414:22Alex Miller (Clojure team)you can find all runners without an end attribute with something like
(d/q
 '[:find ?runner
   :in $
   :where
   [(missing? $ ?runner :end)]]
(d/db conn))#2018-06-0420:41donaldballQuick modeling question: I want to mark attributes as deprecated, indicating they should neither exist nor be asserted in a database. I could justify using a boolean (though there’d be no reason for a false value to ever exist), a long (the t-value of the deprecation), or an instant (the time of the deprecation). Does anyone have any opinions on the best choice?#2018-06-0421:07Alex Miller (Clojure team)Well you can get the t and instant from the transaction that contains the assertion already#2018-06-0510:58souenzzoMy deprecation has 2 values:
why: string. see-also: "ref to many" that points to other attributes. If I want to know when it's deprecated, I can ask datomic when the first why was written. @donaldball
> We can require that variables need binding at invocation time by enclosing the required variables in a vector or list as the first argument to the rule.
Is this just a performance hint/requirement, or are there cases where this would be required in order to obtain correct results?#2018-06-0614:21uwocould the backup-db command result in transactor unavailable errors for other peers?#2018-06-0614:41uwonvm. It shouldn’t. we were using a bad uri#2018-06-0615:10zalkyHi all, I'm working on a recursive datomic rule to return all the nodes of a list attached to an entity:
'[[(link ?e ?node)
   [?e :head ?next]
   (link ?next ?node)]
  [(link ?e ?node)
   [?e :link ?next]
   (link ?next ?node)]
  [(link ?e ?node)
   [?e :type :node]
   [?node :type :node]
   [(= ?e ?node)]]]
While this works, it is somewhat slow (~1s) given we know the entity to which the head is attached, and we have only a dozen or so nodes. I'm guessing that the third clause is what slows it down. Ideally I would just have that third clause assert (= ?e ?node), but the rule then throws a :db.error/insufficient-binding error. Any ideas how to make this traversal more efficient?#2018-06-0615:13eraserhdIf you know that ?e is bound, the third clause can be [(identity ?e) ?node].#2018-06-0615:14zalkyAmazing! That did the trick.#2018-06-0615:18eraserhdI have some stuff like this, and I just went to look at it, and it doesn't make sense anymore 😄#2018-06-0615:19zalkyHa, live in the moment 😛#2018-06-0615:28zalkyFor posterity, to return just the nodes, the final clause would be:
[(link ?e ?node)
[(identity ?e) ?node]
[?node :type :node]]
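For readers following along, rules like these are supplied to a query through the conventional % input. A sketch (peer API; list-eid stands in for the entity holding the :head):

```clojure
;; Sketch: invoking the recursive `link` rule. Rules bind to the
;; conventional % data source; attribute names follow the snippet above.
(def link-rules
  '[[(link ?e ?node)
     [?e :head ?next]
     (link ?next ?node)]
    [(link ?e ?node)
     [?e :link ?next]
     (link ?next ?node)]
    [(link ?e ?node)
     [(identity ?e) ?node]
     [?node :type :node]]])

(d/q '[:find [?node ...]
       :in $ % ?list
       :where (link ?list ?node)]
     db link-rules list-eid)
```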
#2018-06-0615:29eraserhdIt would be neat if Datomic had a predicate for whether a value is bound.#2018-06-0615:35eraserhdI suppose this can be done with something like, (or (and (not [(identity ?e1) :bad-value]) if-bound...) (and (not (not [(identity ?e1) :bad-value]) if-not-bound...)).#2018-06-0615:40Alex Miller (Clojure team)there’s missing?#2018-06-0615:41Alex Miller (Clojure team)https://docs.datomic.com/cloud/query/query-data-reference.html#missing#2018-06-0615:42eraserhdI think that's different entirely, unless there's a trick for it that I'm ... missing ...#2018-06-0617:48favilausing identity to "rename" a binding is a pretty fundamental technique I've found#2018-06-0617:49favilaI continually run into cases where it's impossible to express the query otherwise#2018-06-0617:51favilathe only downside is it forces the clauses to run in only one direction#2018-06-0617:51favilasome kind of datalog primitive would be required to go in both directions#2018-06-0617:53favilabut writing a query without fixed ideas of what will be bound is impossible in practice. This is especially bad for rules. You can't practically speaking make generic rules that are independent of knowledge of what is bound#2018-06-0617:53favila@zalky You can force a rule to require a var to be bound by surrounding the initial args with a vector: [(link [?e] ?node) ...] for eg#2018-06-0618:29zalkyRight, I forgot about that, thanks for the pointer!#2018-06-0617:54favilabut you can't do [(link ?e [?node]) ...]#2018-06-0617:55favilaso you can't write rules that are polymorphic on what is bound#2018-06-0618:17jaretDatomic Ions are now available. http://blog.datomic.com/2018/06/datomic-ions.html
Datomic Cloud 397 and Datomic 0.9.5703 are now available#2018-06-0618:42viestiWhoa!#2018-06-0618:44viestiHaving hacked with AWS Lambda & JVM/Clojure, Ions sounds just the thing that has been missing from the Clojure cloud world domination :)#2018-06-0618:44naomarikto resolve enums within pull api is this the general way everyone does it? {:listing/status [:db/ident]}#2018-06-0618:49richhickey@viesti we hope so!#2018-06-0619:26robert-stuttafordhi @richhickey!#2018-06-0619:26richhickey@robert-stuttaford Hi!#2018-06-0619:27Alex Miller (Clojure team)I can’t wait to see what @robert-stuttaford does with ions…#2018-06-0619:27richhickeyIf you were waiting for peer-like features for cloud, ions are that and more#2018-06-0619:27robert-stuttaford:-))) Christmas came twice this year!#2018-06-0619:28robert-stuttafordi’m really looking forward to digging in#2018-06-0619:29viestia serious contender for any future spare time 🙂#2018-06-0619:30richhickeydefinitely looking for feedback on the docs and whether they make the value props and the mechanisms clear#2018-06-0709:49chrisblomhi, while reading the docs i found a mistake.
In the table here <https://docs.datomic.com/cloud/ions/ions-reference.html#web-code>, :protocol has as value “HTTP verb as keyword”, i suppose it should be “:http or :https”#2018-06-0619:31richhickeyit's one of those inside-out things much like Datomic was originally#2018-06-0619:31robert-stuttaford“For Datomic On-Prem, we have added classpath functions and auto-require support for transaction functions and query expressions.”
how does this change the current peer behaviour? afaik we were always able to add jars to the transactor’s classpath. perhaps a simple before/after for this change would help illuminate things?#2018-06-0619:31robert-stuttafordright - peer put the database in your app. ions puts your app in the database!#2018-06-0619:31eggsyntax@richhickey I think it would be really useful to have what you said above ("peer-like features for cloud") at the beginning of https://docs.datomic.com/cloud/ions/ions.html -- I hadn't gotten that yet after reading through much of that page.#2018-06-0619:32eggsyntaxThat way the value prop is right up front.#2018-06-0619:36richhickey@robert-stuttaford two ways - you can now use an ordinary classpath fn as a tx fn w/o installing in the db, and both there and in query, any such fully-qualified fns will auto-require the namespaces#2018-06-0619:37robert-stuttafordoh! is there an example of how i’d invoke such a not-installed function from a transaction? right now you have to tie it to an ident. is that still required?#2018-06-0619:38robert-stuttafordif it works the way i think it does, that’s seriously great. i was never comfortable with putting code inside storage like that 🙂#2018-06-0619:38richhickeyjust a fully-qualified symbol#2018-06-0619:38robert-stuttafordthat’s metal#2018-06-0619:41robert-stuttafordthis is probably a question for @marshall or @jaret - does the newest CF template for transactors provide some kind of support for supplying class path functions to the transactor as described here?
https://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2018-06-0619:42richhickeyno#2018-06-0619:42robert-stuttafordso we’re still using your AMI - we’d have to roll our own to take advantage of this feature, then#2018-06-0619:44richhickeythere's a ton of plumbing in Cloud to pull that off, things that can't go in on-prem AMI#2018-06-0619:45richhickeyat this point we really want people on AWS to use cloud#2018-06-0619:58richhickeybut if you want to use on-prem and classpath fns on AWS you have to get your code on the AMI#2018-06-0620:04robert-stuttafordthat makes sense, thanks#2018-06-0620:10viestihttps://docs.datomic.com/cloud/transactions/transactions-functions.html#testing seems to give 404#2018-06-0620:45redingerThe correct link should have been https://docs.datomic.com/cloud/transactions/transaction-functions.html#testing
The link has been fixed in the docs, thanks!#2018-06-0705:52viestithanks for the fix 🙂#2018-06-0620:13richhickeythe ions solution has all the power of code deploy, rolling deploys, rollbacks etc#2018-06-0620:14richhickeyit doesn't cycle the instance, just the process#2018-06-0620:18mitchelkuijpersThis looks absolutely amazing, we are currently running on fargate and were looking into Datomic cloud and lambdas (we are currently on prem). One thing I could not find is whether there is a solution for listening to the log with ions?#2018-06-0620:19johnjbesides better and more flexible tx functions (really big and needed feature imo) what other peer-like features does ion have? I'm not familiar with on premise peer library just curious.#2018-06-0620:21richhickey@lockdown- essentially the whole model of your app code running in the db context, with cache and query locality, working sets etc. Where the peer was 'put the brain in your app' ions are 'give your thoughts to the datomic brain cluster'#2018-06-0620:22richhickeybut there's more because, unlike with on-prem, we understand the broader execution context in cloud, so e.g. your app auto-scales with cloud#2018-06-0620:23johnjnice, are lambda functions somehow kept warm by the setup?#2018-06-0620:23mitchelkuijpersOr is there another solution to for example push data from Datomic to Elasticsearch?#2018-06-0620:27richhickey@reitzensteinm you could use any logging you want, just put the logging lib in your classpath and grant the cloud node role the needed permissions#2018-06-0620:28richhickey@mitchelkuijpers ^#2018-06-0620:29mitchelkuijpers@richhickey I meant the Datomic tx-log#2018-06-0620:29richhickey@mitchelkuijpers ah, there is no push ATM#2018-06-0620:30mitchelkuijpersAh ok, we currently have a separate process that listens to the tx-log which pushes data to Elasticsearch.
Which we absolutely love#2018-06-0620:30richhickey@mitchelkuijpers one could imagine an ion callback on txes, could do whatever you like#2018-06-0620:32dominicmhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#push uses -Adev which is inconsistent with the rest of the docs, e.g. https://docs.datomic.com/cloud/ions/ions-tutorial.html#deploy and https://docs.datomic.com/cloud/ions/ions-tutorial.html#monitor#2018-06-0620:34mitchelkuijpersYeah something like that would be awesome, really loving this idea. Deploying apps without managing servers#2018-06-0620:38johnj@richhickey for the JVM one problem is bursts of traffic, where api gateway will be invoking concurrent execution of the lambdas creating more cold starts#2018-06-0620:38Alex Miller (Clojure team)@dominicm should be -A:dev (I should have caught that!)#2018-06-0620:39dominicmI'm an eagle on this stuff. I don't like the -Adev syntax very much, although I accept it is good to be tolerant of.#2018-06-0620:39dominicm(so I had an ulterior motive here, basically)#2018-06-0620:43redingerThis has been fixed in the docs, thanks!#2018-06-0620:39viestiThinking about the Lambda 5min runtime limit, I guess longer processes would be done outside of ions#2018-06-0620:39johnjcreating spikes, I know there are some methods devs use to keep the lambdas warm#2018-06-0620:40richhickey@lockdown- AWS understands the issues re: Java startup and has been improving (keeping alive, freeze/thaw etc)#2018-06-0620:41richhickeyour lambdas are minimal, they proxy to the code on the Datomic cluster#2018-06-0620:43johnjok, definitely trying and testing these#2018-06-0621:38spiedenany timeline on the prem -> cloud migration tool?#2018-06-0621:39spieden> If you are working on committed code with no local deps you will get a stable revision named after your commit.
what if the local deps are in the same git repo? ^^#2018-06-0621:39spiedenions look great!#2018-06-0621:51stuarthalloway@spieden why would you have local deps in the same repo?#2018-06-0621:51stuarthallowayno timeline on migration tool, but I will count you as an implicit +1#2018-06-0622:35csm+1 here, too#2018-06-0621:53spiedenwell, if i want to have a shared library that other components can move in lockstep with#2018-06-0621:53spieden.. then keeping them all in the same repo with separate deps.edn files is convenient and simple#2018-06-0621:55spiedenbeen moving towards this away from snapshot jars and trying to implement cascading builds across multiple vcs projects#2018-06-0621:56spieden@stuarthalloway i got an on-prem license into our budget for this year but now i’m not sure what to do =)#2018-06-0621:57richhickey@spieden a big objective of the local deps support is that it removes the cascading builds/snapshot problem completely#2018-06-0621:57richhickeyyou can deploy your app and one or more libs-in-progress while dev/testing#2018-06-0621:58richhickeyno artifacts needed#2018-06-0621:58spiedenyes it’s very appealing#2018-06-0621:58reitzensteinmit's not every day you get a wrong number call from rich hickey#2018-06-0621:58spiedenhaving a single commit hash version multiple interdependent components is great too#2018-06-0621:58spieden..
which is why i was wondering about: “If you are working on committed code with no local deps you will get a stable revision named after your commit.”#2018-06-0621:59richhickey@reitzensteinm sorry about that, completion-o#2018-06-0622:05spieden(my hope is it would read something like: “If you are working on committed code with no local deps outside the git repo root you will get a stable revision named after your commit”)#2018-06-0622:11spiedeneasy enough to fudge on our own i suppose by passing the hash as revision name anyway =)#2018-06-0622:34spiedeni’ve been wanting to take our step functions processes serverless, and resolving task states to ion-created lambdas in our client lib (stepwise) could be handy#2018-06-0703:54johnjare all features of ion available for solo?#2018-06-0704:36steveb8nhas anyone tested AWS App-Sync using an Ion Lambda? i.e. graphql api for Ion without any code#2018-06-0709:50chrisblomehm, wait no#2018-06-0709:50chrisblomif its like ring, :protocol should be something like “The protocol the request was made with, e.g. “HTTP/1.1".” and :scheme is :http or :https#2018-06-0710:21stuarthalloway@chris.blom thanks! You are right, :protocol should be like Ring#2018-06-0712:17chrisblomOk, it was not immediately clear to me what ion is about:
My understanding now is that:
- it's an application server integrated with datomic
- has built-in tooling based on deps.edn to deploy based on git revisions
- it integrates with AWS Lambda and API Gateway to handle http requests, in a mostly ring compatible way
It's not clear to me what “deploying your code to a running Datomic cluster” entails:
- What exactly runs on Lambda, and what runs on the datomic cluster?
- What are the limitations of running code in a datomic cluster? Can I access the local disk and other AWS services?
- How does the autoscaling work?
- Is it possible to develop and test ions locally?
- Can I run some sort of test environment for CI testing?#2018-06-0712:23Alex Miller (Clojure team)The picture at https://docs.datomic.com/cloud/ions/ions.html might help#2018-06-0712:30Alex Miller (Clojure team)Essentially all of your code is running on the d cluster. Being there you can access all the aws services. For storage, I think you’d use aws storage services, not disk. Stu or Rich can probably answer some of the others better than I can but generally the answers will be to use the aws functionality for autoscaling, ci, etc.#2018-06-0712:42chrisblomok, so the Lambda functions for an Ion are just glue to interface with the outside world, and delegate the actual work to the Datomic cluster?#2018-06-0712:51andrewhr@chrisblom yes https://clojurians.slack.com/archives/C03RZMDSH/p1528317660000673#2018-06-0712:52chrisblomthanks, good to know#2018-06-0712:52stuarthallowayI care a lot about local dev (Give me REPL or give me death!)#2018-06-0712:53stuarthallowayClient API now supports :server-type :ion, which connects remotely when you dev on your laptop, but connects in memory when you deploy the same code to Datomic: https://docs-gateway-dev2-952644531.us-east-1.elb.amazonaws.com:8181/cloud/ions/ions-reference.html#server-type-ion#2018-06-0712:55andrewhrstu, the transaction producing functions will run on whatever node in datomic cluster, right? But the final transactions are still directed to the node acting as transactor? (so essentially, it’s like the peer model)#2018-06-0712:57richhickey@andrewhr the tx fns run where the txes do. There isn't a dedicated transactor per se as with on prem#2018-06-0712:58richhickeybut you will be able to have independent clusters running app/query code and handling txes#2018-06-0713:05andrewhrI remember something about “avoiding contention”, but in retrospect doesn’t make too much sense giving ddb could probably just autoscale in response. 
Maybe this image made me a little confused https://docs.datomic.com/cloud/whatis/architecture.html#production-topology#2018-06-0713:07andrewhras far as I understand (together with your previous explanation), query groups aka “extra clusters” will tunnel their transactions through the primary tx group#2018-06-0713:08andrewhror when you say “extra clusters” do you really mean “one set of storage resources” + “multiple sets of primary compute resources”?#2018-06-0713:27chris_johnson@steveb8n I gave a talk last month at Serverless Chicago about using Datomic Cloud with AppSync, you may expect some kind of preliminary blog post or code sample extending that talk to Ions like …today?#2018-06-0713:28chris_johnsonI am finding it very difficult to focus on my day-job work right now, knowing that I could be spinning up a Cloud instance in my personal account and exploring getting AppSync to run, looking at modeling what $day-job does with the txn report queue in Ions callbacks, etc. etc. so …I don’t think it will be too long before I have a trip report re: AppSync ready for people to read hehe#2018-06-0805:42steveb8nAgreed, lots of people will be interested to know the graphql options using API Gateway on top of Ions.#2018-06-0805:43steveb8nfallback would be Lacinia but App Sync would be better#2018-06-0805:43steveb8nbest would be App Sync subscriptions support. Somehow I doubt that’s possible. What do you think?#2018-06-0713:37richhickey@chris.blom - What exactly runs on Lambda,
a generic proxy. We call it 'Ultimate, the lambda'
- and what runs on the datomic cluster?
everything
- What are the limitations of running code in a datomic cluster? Can I access the local disk and other AWS services?
AWS services sure. It is *your* instance, running in *your* VPC. That said, local disk, probably not a great idea.
- How does the autoscaling work?
You can trigger autoscaling of the cluster on any of various metrics we or AWS produce.
- Is it possible to develop and test ions locally?
As Stu said, sure! The db API you'll see in the ion is the same as the client sync API, and the :ion server type dynamically loads the right back end.
- Can I run some sort of test environment for CI testing?
Yes. You can run a solo instance that is a target of the same application, deploying early revs to it and tested revs to prod.
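The `:ion` server type mentioned in the answers above (connect remotely from a laptop, in memory when deployed) can be sketched as a client config. A hedged sketch — every value below is a hypothetical placeholder, not taken from the thread:

```clojure
;; Hedged sketch of a :server-type :ion client config.
;; :region, :system, :endpoint, and db name are all made-up placeholders.
(require '[datomic.client.api :as d])

(def client
  (d/client {:server-type :ion
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
             :proxy-port  8182}))

;; From a laptop this goes through the bastion proxy; inside a deployed
;; ion the same config resolves to an in-memory connection.
(comment
  (d/connect client {:db-name "my-db"}))
```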
#2018-06-0713:41chrisblomthanks, that clears things up#2018-06-0713:42chrisblomalso, its the answers i was hoping for 😁#2018-06-0713:38eggsyntax"We call it 'Ultimate, the lambda'"
That's awesomely horrible facepalm 😂#2018-06-0713:46dominicmI suppose if everything runs in the cluster, then there's no way to restrict certain functions to certain operations, as you can do with Lambdas?#2018-06-0713:52jeroenvandijkWe had some bad experiences with AWS lambda in the past: 1. It has a global queue per account (our solution: never use AWS lambda for anything of high throughput as it will block other tasks unexpectedly). 2. AWS requires node updates sometimes (one time within 3 months from launch). @richhickey Are these issues taken into account? Is the ultimate lambda free of these concerns? Thank you.#2018-06-0713:58richhickey@dominicm There are distinct instances of ultimate the lambda and each is an independent AWS Lambda and proxies to a particular fn on a particular Datomic compute group. From there you have all the ordinary wiring up of Lambdas available, to particular events etc.#2018-07-0722:33miridiushow much file system access do Ion functions have? E.g. can I have my schema stored in an EDN file that gets slurped by my Ion code? I assume the resources directory would need to be included in deps.edn?#2018-07-0723:01Oliver GeorgeThere is an example of this here https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L47#2018-07-0802:19miridiusthanks!#2018-07-0722:38miridiusThe main reason I ask is that to serve a web app from Ions at some point it's going to be necessary to serve static files (e.g images, CSS, compiled CLJS). An alternative would be to split the code to have all the front-end files hosted elsewhere and just use Ions for the API, but then you have a new class of problems around coordinating multiple deployments and multiple domains.#2018-07-0808:48rhansenI would just store static assets in s3. S3 has a builtin http server for serving static files, and that way your datomic system doesn’t waste cycles it doesn’t have to.#2018-07-0723:55orlandowHi, I’m new to datomic and AWS. 
I’m following the ions tutorial, integrating with slack via the api gateway but I’m getting a base64 encoded body, I can’t find a setting to configure it, I think I have the same code as the tutorial and I followed every step, am I missing something?#2018-07-0800:00orlandowmy ion code is:#2018-07-0800:01orlandow(defn echo* [{:keys [body]}]
  {:status 200
   :headers {}
   :body "hello world"})

(def echo (apigw/ionize echo*))
#2018-07-0800:01orlandowand my config#2018-07-0800:01orlandow:lambdas {:echo {:fn iontest/echo
                 :integration :api-gateway/proxy
                 :description "echos input"}}
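The base64-encoded body orlandow is seeing (API Gateway's `*/*` binary media type encodes request bodies) can be undone in the handler with the plain JDK decoder. A hedged sketch — `decode-body` is a hypothetical helper, not part of the ion API:

```clojure
;; Hypothetical helper, assuming the request body arrives as a
;; base64-encoded string (API Gateway "Binary Media Types */*" setting).
(import '(java.util Base64))

(defn decode-body
  "Decode a base64-encoded request body into a UTF-8 string."
  [^String body]
  (String. (.decode (Base64/getDecoder) body) "UTF-8"))

(decode-body "aGVsbG8gd29ybGQ=")
;; => "hello world"
```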
#2018-07-0800:43orlandowI just deleted and recreated the api and it’s working now ¯\(ツ)/¯#2018-07-0800:43orlandowI must have missed something the first time#2018-07-0818:41oscarMaybe you didn't set the "Binary Payload: */*" setting?#2018-07-0821:09orlandowYes, that’s probably it, thanks, I did see it and changed it but perhaps I didn’t deploy the api afterwards.#2018-07-0801:52Oliver GeorgeHi @stuarthalloway I'm curious why the ion-event-example breaks the schema into keyed chunks and only migrates new chunks. Specifically: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L47#2018-07-0801:53Oliver GeorgeThere would be less moving parts without the grouping. Perhaps there are performance considerations with loading the whole schema (even if 95% unchanged) each time. Can see it would be more managable working with a large edn schema file with chunks & would allow migration to report what schema chunks are being loaded...#2018-07-0819:30stuarthallowayhi @olivergeorge ! it is just sample code, I don't have any agenda about how people manage schemas and migrations#2018-07-0819:31stuarthallowayalthough I do have an agenda about "what" and "why" 🙂 http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2018-07-0821:03Oliver GeorgeThanks for clarifying#2018-07-0801:56Oliver George(I'd love to see other people's approaches to declaring and migrating schema too.)#2018-07-0803:11steveb8nI’m working this out for myself right now. I’m deploying parts of my app using the “component” lib where each component deploys it’s own schema on start. It could be a bit heavyweight inside an Ion but so far it’s ok. I’ll blog on this later if the pattern holds up as the app grows#2018-07-0811:05eoliphantI’ve used conformity a good bit with on-prem, for schema mgmt. Coming from aeons of using flyway/liquibase/etc in the java/sql world it seemed a good fit. 
It needs some love to get it working with the client api, been thinking about tackling a pr.
But yeah for now @olivergeorge, I’m just memoizing the load of the whole thing#2018-07-0811:38eoliphantI have a couple questions about the ion-config.edn stuff
The :allow section lists what datomic is allowed to call. The ‘entry points’ if you will. In looking over the starter though, I don’t quite get how a function, that’s not a lambda would ever get executed directly, and need to be ‘allowed’. Are non-lambda funcs just there to take advantage of the namespace loading? If that’s the case maybe it’d be cleaner to have a separate more explicit tag?
Also, the reference describes the ion function signatures. It appears that the transaction and query types are really just recommended conventions, while the lambda and web service types are describing the actual required function signature. Is this the case?#2018-07-0818:45oscarIf I understand it correctly, you need to :allow transaction and query functions. They aren't lambdas but they need to be declared in case an external client called them through a query or transaction.#2018-07-0921:08eoliphantah lol. I’d jumped right into lambdas, etc and totally missed that we now have transaction funcs, etc via ions as well Was really missing the transaction funcs. We use them judiciously, but for some key functionality in for on-prem#2018-07-0818:38eoliphanthi, what’s the story for logging with ions? went poking around in the log for the lambda, then remembered that they’re just the glue. And the system or whatever log just looks like health stuff#2018-07-0818:49oscarI haven't messed around with logging too much, but I would try prepending something searchable like the current namespace and then searching the "datomic-<compute-stack>" log-stream for it.#2018-07-0818:58stuarthallowayYou can use any logging tech you would use for EC2, but stay tuned, help is on the way.#2018-07-0820:40eoliphantI’m fairly certain that ‘stdout’ logging isn’t showing up in the compute stack’s log stream.
Ok @stuarthalloway will try using CWL or datadog or something directly#2018-07-1119:40stuarthalloway@U380J7PAQ ... and help has arrived http://blog.datomic.com/2018/07/datomic-ion-cast.html#2018-07-1119:40eoliphant@whohoo!#2018-07-1200:40eoliphanthey @stuarthalloway just an FYI, looks like the doc page has 2 copies of the same content
https://docs.datomic.com/cloud/ions/ions-monitoring.html#2018-07-1200:43stuarthallowaythanks, will investigate!#2018-07-0819:23oscar@stuarthalloway What version of jackson-core does Datomic Ions use? I found that my code was getting broken only in my deployment because of my dependency on cheshire and its transitive dependency on jackson-(core|dataformat-smile|dataformat-cbor) 2.9.0. Pinning them down to 2.8.11 seems to have fixed it.#2018-07-0819:25stuarthallowayHi @U0LSQU69Z! We just did an update to help people with this problem: https://docs.datomic.com/cloud/releases.html#402-8396#2018-07-0819:26stuarthallowayI am guessing you are on an older release than that one.#2018-07-0819:26oscarYes! Thank you!#2018-07-0822:59eoliphantare there any restrictions on outbound traffic for ions? I’ve looked over the net acl’s and sg’s but didn’t see anything obvious. Trying to shoot my logs over to loggly, but nothing is showing up#2018-07-0900:36Oliver GeorgeI'm thinking through the right place to do schema migrations. The key issue being that a deployment is unstable until schema migrations are run. I've seen examples of doing the schema migration as part of a memoized get-connection helper. Are there other approaches I'm missing? Perhaps there should be a hook to run schema migrations as part of the code deploy step.
Ref: https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L69#2018-07-0900:38eoliphantyeah some sort of :init-method in the config would be nice for that kind of stuff. But i’m just doing the memoization for now was well#2018-07-0900:48Oliver George@eoliphant the downside I see regarding memoization is the risk that the schema fails at "run time". I'd prefer to the deploy to fail. Could argue it's unlikely given schemas should grow, not break, but human error is a thing. e.g. added a unique constraint but values in db aren't unique.#2018-07-1001:26miridiusYou could write your own deploy code that transacts the schema before running the ions deploy op, perhaps#2018-07-1001:55Oliver GeorgeHi @U0DHVSBHA, yes that would work. For a commit based deploy there is little chance of the local environment (running the migration) being inconsistent. That's the risk I see. Deployment becomes coupled to local dev env and the code pushed. More moving parts. More complexity.#2018-07-0900:51eoliphantyeah I totally agree, but I think that’s going to have to be something they build in. If say the :init-function returns falsy, etc yeah then kill the deploy. Though even that gets interesting, since you could possibly say successfully transact in some schema, but then have something else in the code fail, such that it returns false. So you’d roll back the deployment, but still have made changes.#2018-07-0900:54Oliver GeorgeTrue enough but happily we're talking about an unlikely case given "grow don't break" schema approach. Does require some coding practices reduce the risk of surprises.
Ref: https://docs.datomic.com/cloud/best.html#plan-for-accretion#2018-07-0900:55eoliphantyep, that’s what I do with all my datomic stuff. And similarly one would just have to be disciplined about not doing anything too hinky in this proposed init-method.#2018-07-0900:55Oliver GeorgeCrazy idea would be a schema migration transaction function which runs a suite of tests before committing (if that's even possible).#2018-07-0900:56eoliphantok i’m pulling my hair out right now.. My ionized gw function is base64 encoding my responses lol#2018-07-0900:58orlandowThat happened to me too, maybe this helps:#2018-07-0900:58orlandowhttps://clojurians.slack.com/archives/C03RZMDSH/p1531007700000037#2018-07-0900:56eoliphantah that would be interesting#2018-07-0900:57eoliphanti use conformity for my on-prem stuff. it’s like flyway/liquibase lite. But provides a little structure around the process#2018-07-0900:58Oliver GeorgeThanks, I'll check it out.#2018-07-0901:01eoliphantwell it doesn’t support the client api yet though 😞#2018-07-0901:01eoliphanti was planning to fork it and see if I could add that support#2018-07-0917:26cjsauerI took a stab at this using a small gist. Haven't discussed a PR or anything though.
https://gist.github.com/cjsauer/4dc258cb812024b49fb7f18ebd1fa6b5#2018-07-0901:04oscarWhat I do for migrations is test that all of the queued migrations work using d/with. If I don't throw any exceptions, then I commit the transactions.#2018-07-0901:07eoliphantanyone had this problem? I had this function go a bit wonky on me.
I’ve stripped it down to this
(defn handle-request*
  "Handle API Requests"
  [{:keys [headers body]}]
  (log/debug "here's something")
  {:body "body here" #_(json/generate-string {:its "ok"})
   :headers {"Content-Type" "application/json"}
   :status 200})
but in the gateway log, I’m seeing the following (and the encoded value is returned to my client)
(c562e562-8313-11e8-8b30-f3eb1fd30d3f) Endpoint response body before transformations:
{
  "body": "Ym9keSBoZXJl",
  "headers": {
    "Content-Type": "application/json"
  },
  "statusCode": 200,
  "isBase64Encoded": true
}
#2018-07-0901:08oscar@eoliphant Have you added */* as a Binary Media Type?#2018-07-0901:10eoliphantah hell lol#2018-07-0901:10eoliphanti was having another issue and recreated the gateway#2018-07-0901:10eoliphantforgot to do that, again.. thanks#2018-07-0901:59eoliphantbeen pulling my hair out all day trying to get logging working via a logback appender for loggly. And just realized that most of the typical java ecosystem stuff will probably never work. Since most of it depends on classpath scanning, etc etc, so when your ions deploy all that’s already taken place.. ugh.. Maybe a good use case for modules 🙂#2018-07-0908:06pradyumnahi, is there a preferred strategy to manage data locally in a standalone application, which is ordinarily online to access several other datomic databases. When offline the app should be able to still perform with whatever information it has cached. It should be able to store locally some of the work and then try to update the remote databases as applicable. Of course, the issue of conflict needs to be addressed in a sane way. I was thinking maybe have a local datomic instance to serve as a cache for multiple remote datomic instances. or is there something better and simpler.#2018-07-0908:26steveb8n@pradyumna take a look at the AWS AppSync javascript lib. It handles all of these requirements for you, including conflict resolution. I’m helping out on a project which hopes to expose Ions using AppSync so it should fit pretty well#2018-07-0908:50pradyumnathanks @steveb8n. i checked this. unfortunately its not exactly fitting in. its clojure (jvm, not javascript)#2018-07-0912:55eoliphantYou’d probably have to implement this yourself @pradyumna like most db’s there’s no explicit support for that use case AFAIK. You could potentially use something like onyx for moving the updates between databases, but you’d be on the hook for conflict resolution, etc#2018-07-0913:59eoliphantHi, I’m still trying to get some form of logging working. 
In the course of this I’ve run into another issue. Given what I mentioned previously about commons/slf4j/etc stuff probably never working, I tried creating a custom logger with timbre, that just fires entries via REST into loggly. I’m using cljs-ajax for this, and it works fine in local dev, but when I call it now, I’m getting a ClassNotFoundException for org.apache.http.HttpResponse, so there are presumably some classloader conflicts there. I noticed that the ion-event-example uses some cognitect http-client lib, but I can’t seem to find it in any of the repos
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
It returns the Client, though, and it seems like there aren't any issues. I just want to know if this will be a problem.#2018-07-0919:20stuarthallowayhi @U0LSQU69Z what version of Clojure and Java are you running?#2018-07-0919:25oscaropenjdk version "1.8.0_172"
clojure "1.9.0"#2018-07-0919:53stuarthallowaythat won't harm anything, but I will squelch it in a future build#2018-07-0919:58oscarCool. Thanks!#2018-07-0920:26timgilbertHi everybody, I could have sworn I saw a project on here that you could point to a datomic database and get a GraphViz diagram of the schema, but now I can't seem to find it. Anyone remember it?#2018-07-0921:02nilpunningThink I played around with this a while back. https://github.com/felixflores/datomic_schema_grapher#2018-07-0920:58eoliphantHey @stuarthalloway, I think there may be an issue ionized lambdas handling of OPTIONS requests. For this ‘echo’ ion
(defn api-request*
  "lambda entry point"
  [{:keys [headers body request-method]}]
  (try
    {:status 200
     :body (json/generate-string request-method)}
    ....
I get “post” “get”, etc just fine but a "message": "Internal server error" for an OPTIONS request, need that since API gateway expects the lambda to respond to the CORS preflight stuff#2018-07-0921:23oscarDo you have an OPTIONS method next to your ANY method in API Gateway?#2018-07-0921:27oscarSomething similar was happening to me because I hit "Enable API Gateway CORS". If you did the same, delete the OPTIONS method and handle it in your Ion. This is happening because AWS matches your OPTIONS request content-type to */* and base64 encodes it. The "Mock Integration" that the preconfigured CORS handler generates expects JSON and throws when it can't parse.#2018-07-0921:41eoliphantyeah that’s exactly what I did#2018-07-0921:43eoliphantthat did it. thanks!#2018-07-0921:44oscarNo problem!#2018-07-1000:30johnjhttps://docs.datomic.com/cloud/whatis/data-model.html#sec-5#2018-07-1000:31johnjis that saying its ok to use :person/name instead of :customer/name or :employee/name ?#2018-07-1000:34johnjand differentiate between customer or employee by other attributes? for ex: :person/department, :person/role for employees.#2018-07-1000:36johnjjust confused if that advice is given for prototyping or is idiomatic to do so#2018-07-1001:30miridiusSeems like Datomic Cloud (with :server-type :ion) doesn't like namespaced database names?#2018-07-1001:41miridiusI also can't seem to list-databases (https://docs.datomic.com/client-api/datomic.client.api.html#var-list-databases) to work 😞
(def cfg {:server-type :ion,
:region "us-east-1",
:system "dev",
:query-group "dev",
:endpoint "",
:proxy-port 8182})
=> #'user/cfg
(def client (d/client cfg))
=> #'user/client
(d/list-databases client)
CompilerException java.lang.IllegalArgumentException: No single method: list_databases of interface: datomic.client.api.Client found for function: list-databases of protocol: Client, compiling:(/tmp/form-init5702408163425337762.clj:1:1)#2018-07-1002:17euccastroI'm having the same error as this user: https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508
except that I can't download the jar at all:
[email protected]#2018-07-1002:21euccastrowget works, though, so I’m baffled#2018-07-1002:23euccastroFWIW, this is my version of AWS Tools:
[email protected]#2018-07-1002:44euccastroif I either add --no-sign-request to the aws invocation or if I add read permission for arn:aws:s3:::datomic-releases-1fc2183a/* in the IAM group of the user I have credentials configured for, then the aws cp succeeds, but I still get the same error when trying to run clj
policies#2018-07-1003:50euccastroI can continue to pinpoint the exact permissions if that's useful#2018-07-1003:51jaretWere you using an old account for AWS? You have to have an account that supports EC2-VPC#2018-07-1003:52jaretIf your AWS account was prior to DEC 4 2013 it wouldn’t support EC2-VPC#2018-07-1003:52jaretThat wouldn’t be it nvm#2018-07-1003:52jaretI need to look again at what perms are needed#2018-07-1004:00euccastro@jaret: adding these permissions solved it for me:
{
  "Sid": "VisualEditor2",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:GetBucketLocation"
  ],
  "Resource": [
    "arn:aws:s3:::datomic-releases-1fc2183a",
    "arn:aws:s3:::datomic-releases-1fc2183a/maven/releases/*"
  ]
}
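For reference, the release bucket in the policy above is the same one a deps.edn Maven repo entry points at. A hedged sketch of such a fragment — the repo name and the version placeholder are illustrative, not from the thread:

```clojure
;; deps.edn fragment; "RELEASE-VERSION" is a placeholder, and the
;; repo alias "datomic-cloud" is an arbitrary local name.
{:mvn/repos {"datomic-cloud" {:url "s3://datomic-releases-1fc2183a/maven/releases"}}
 :deps      {com.datomic/ion {:mvn/version "RELEASE-VERSION"}}}
```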
#2018-07-1004:25euccastroI got the following error trying to perform the initial push in the ions tutorial:
{:command-failed "{:op :push}",
:causes
({:message
"Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2AF8C01FF6D0B032; S3 Extended Request ID: (elided)",
:class AmazonS3Exception})}
adding the following permissions fixed it:
{
  "Sid": "VisualEditor3",
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::datomic-code-3a1b169a-4a28-4693-8e32-891f20e65112/*",
    "arn:aws:s3:::datomic-code-3a1b169a-4a28-4693-8e32-891f20e65112"
  ]
}
#2018-07-1004:31euccastronitpick: I now get the following error when trying to do the initial push. it's obvious that I need to commit the addition of the ion-config.edn, but the tutorial doesn't mention it
{:command-failed "{:op :push}",
:causes
({:message
"You must either specify a uname or deploy from clean git commit",
:class IllegalArgumentException})}
#2018-07-1004:33euccastroanother permissions error. I'm wondering whether I did something wrong in the datomic cloud setup
{:command-failed "{:op :push}",
:causes
({:message
"User: arn:aws:iam::563900263565:user/deitomique is not authorized to perform: codedeploy:RegisterApplicationRevision on resource: arn:aws:codedeploy:eu-central-1:563900263565:application:deitomique (Service: AmazonCodeDeploy; Status Code: 400; Error Code: AccessDeniedException; Request ID: d00b0b01-83f9-11e8-ad19-ad95a71fbe60)",
:class AmazonCodeDeployException})}
#2018-07-1004:46euccastroand then some more in CloudFormation and StepFunction, when deploying#2018-07-1005:15euccastrofinally, I get the following error when trying to invoke the API Gateway endpoint via curl: {"message":"Missing Authentication Token"}#2018-07-1005:19euccastronevermind; I was using the URL as it appears in the Invoke URL, so I was missing the /datomic at the end of the path#2018-07-1005:26euccastrowhat's this /datomic suffix all about anyway? should I just add /datomic at the end of any Invoke URLs exposed via API Gateway, or is that set somewhere (that I missed) in the ion-starter project?#2018-07-1005:28euccastroions is awesome BTW; I was just pointing out points of friction in the tutorial, should that help#2018-07-1006:50Oliver George@euccastro That's a recent bug fix in the tutorial. /datomic can be anything... makes sense if you think about having many routes associated with your endpoint based on request path.#2018-07-1010:15euccastrothanks!#2018-07-1015:58luchiniIf anyone is looking for a super basic, very fast, getting started material for Datomic Ions, I’ve put this together last night: https://twitter.com/tiagoluchini/status/1016698810364461058#2018-07-1015:59eoliphanthi, is it the case that say limit and offset aren’t available in the sync client API?#2018-07-1016:01johnjwhy do you believe that?#2018-07-1016:05johnj@eoliphant https://docs.datomic.com/cloud/client/client-api.html#offset-and-limit#2018-07-1016:07eoliphantyes that’s what I’m looking at. The async api for say q takes a map of the form {:query '[:find ..] :offset .. :limit ..}
The sync api looks like the same list (or map) form as on prem [:find .. :where.. ]#2018-07-1016:10eoliphantyeah it looks like :chunk :offset and :limit are only available for the async api#2018-07-1016:34oscar@eoliphant That's not correct. From the docs "The arity-1 version takes :query and :args in arg-map, which allows additional options for :offset, :limit, and :timeout. See namespace doc."#2018-07-1016:35oscar(arity-1 version (q {:query '[:find ..] :offset .. :limit ..}))#2018-07-1017:07eoliphantah yeah, I didn’t pull the db in to the map#2018-07-1017:16rhansenNeed some help to formulate a query.
In my application, a character can have a set of skills. Those skills can be based off of other skills. And those skills can be based off of other skills again.
How do I write a recursive query which gives me all the skills of a character, but also all the skills those skills reference?#2018-07-1017:16rhansenIf that was confusing I can happily make a better attempt at explaining it.#2018-07-1017:19donaldballYou probably need to use rules: https://docs.datomic.com/on-prem/query.html#rules#2018-07-1017:25rhansenI might be missing something obvious here. But I don't know why this would help 😅#2018-07-1017:26rhansenoh#2018-07-1017:27rhansenI think I get it... opening repl#2018-07-1017:31rhansenNo. I didn't 😞#2018-07-1017:34rhansenI fail to see how rules can be used to form recursive queries 😕#2018-07-1017:45oscarYou set up two rules with the same name. One that is your base case, one that follows your "skills" chain and recursively calls the rule, again.#2018-07-1017:46rhansenahh, ok#2018-07-1017:47rhansenThanks for the heads up 😃#2018-07-1018:09Oleh K.Hi! What is the best way to fill a datomic database with test data?#2018-07-1105:48val_waeselynckTransact the application schema, then transact the test data? It's hard for me to see where what difficulties you're encountering without more context#2018-07-1018:24souenzzoHey, I'm still not on ions 😢
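The two-rule recursion oscar describes can be sketched like this. A hedged sketch — the attribute names `:character/skills` and `:skill/based-on` are made up for illustration, and the usage shows the arity-1 map form of `q` (with `:limit`) discussed earlier in the thread:

```clojure
;; Hedged sketch: two rules sharing one name give recursion.
(def rules
  '[[(all-skills ?c ?s)
     [?c :character/skills ?s]]   ; base case: skills directly on the character
    [(all-skills ?c ?s)
     (all-skills ?c ?mid)         ; recursive case: follow the based-on chain
     [?mid :skill/based-on ?s]]])

;; Usage, assuming (require '[datomic.client.api :as d]) and that
;; db and character-eid are in scope:
(comment
  (d/q {:query '[:find ?s
                 :in $ % ?c
                 :where (all-skills ?c ?s)]
        :args  [db rules character-eid]
        :limit 100}))
```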
Are there any cons to running Datomic on Fargate?
Apart from formatting/html issues, is there a problem in this tutorial?
https://www.avisi.nl/blog/2018/02/13/how-to-run-the-datomic-transactor-on-amazon-ecs-fargate#2018-07-1107:19gerstreeOuch, that looks bad. We just moved to a new website/platform, will ping the devs to fix that.#2018-07-1107:20gerstree@U2J4FRT2T I can share large parts of our cloudformation template with you if you like.#2018-07-1020:24fingertoeTrying to follow the “First time upgrade instructions” https://docs.datomic.com/cloud/operation/upgrading.html
I don’t see the “Reuse existing storage on create” option to mark true in my AWS console.. Did they change it on us?#2018-07-1023:57oscar@fingertoe It's there. Are you sure that you copied the storage template?#2018-07-1103:29fingertoeThanks @oscar… I am making progress now..#2018-07-1104:19bmaddyDoes anyone know how to make datomic-free use more memory? I'd like to try -Xms4g -Xmx4g.#2018-07-1104:21bmaddyI can't find anything that says what the max memory amount is for the free version, so I'm not sure it's even possible...#2018-07-1104:44bmaddyNevermind, I found it in the transactor script: bin/transactor -Xms4g -Xmx4g ...#2018-07-1106:38euccastrowhat are the advantages of using entities with :db/ident, as opposed to keywords, for enumerations? is it only that you can assign other attributes to those entities?#2018-07-1106:40euccastrooh and that misspelling a keyword may go unnoticed for longer. anything else?#2018-07-1109:13rhansenI think that's about it. Since datomic isn't really a good fit for huge breaking changes to its schema, those advantages are really nice though.#2018-07-1109:46Andreas LiljeqvistProbably a performance advantage as well#2018-07-1109:47Andreas LiljeqvistDisadvantage is representation, :mykey vs 12312454123#2018-07-1115:07bmaddyI'm trying to rewrite some sql in datalog. Does anyone see what I'm doing wrong here?
;; SELECT sub_type, AVG(duration) AS "Average Duration"
;; FROM trips
;; GROUP BY sub_type;
(d/q '[:find [?st (avg ?d)]
:with ?st
:where
[?e :trip/sub-type ?st]
[?e :trip/duration ?d]]
(d/db conn))
I get ArrayIndexOutOfBoundsException [trace missing]#2018-07-1115:09chrisblomdon’t wrap ?st (avg ?d) in []?#2018-07-1115:09bmaddyYeah, that gives the same thing. 😕#2018-07-1115:12chrisblomdrop the :with part#2018-07-1115:16bmaddyThat gives a result, but the sub-types get coalesced
(d/q '[:find [?st (avg ?d)]
:where
[?e :trip/sub-type ?st]
[?e :trip/duration ?d]]
(d/db conn))
["Casual" 3283.31254089422]
Other sub-types do exist:
(d/q '[:find [?st (avg ?d)]
:where
[?e :trip/sub-type ?st]
[(= ?st "Registered")]
[?e :trip/duration ?d]]
(d/db conn))
["Registered" 1145.4663382594417]
#2018-07-1115:52chrisblomah ok, you only get the first result now because in :find you wrap ?st (avg ?d) with []#2018-07-1115:52chrisblomdoes it work if you remove the [...]?#2018-07-1115:55chrisblomSee https://docs.datomic.com/on-prem/query.html#find-specifications#2018-07-1115:57bmaddyHmm, I'm not seeing a ... to remove. Thanks a ton for taking a look at this, btw.#2018-07-1115:57chrisblomah, i meant your query should look like this:#2018-07-1115:57chrisblom(d/q '[:find ?st (avg ?d)
:where
[?e :trip/sub-type ?st]
[?e :trip/duration ?d]]
(d/db conn))#2018-07-1115:59bmaddyAh! So I only need :with if the relvar I'm grouping on isn't included in the :find clause I bet! That totally fixed it!#2018-07-1116:00bmaddyThanks a ton @chrisblom!#2018-07-1116:00chrisblomyeah, usage of :with is a bit tricky#2018-07-1116:00chrisblomthe error message does not help much#2018-07-1116:00bmaddyYeah. I tend to get bewildered by find-specifications also, so I think that contributed.#2018-07-1119:35jarethttp://blog.datomic.com/2018/07/datomic-ion-cast.html#2018-07-1122:56miridiusAwesome! Minor issue: in the metrics section (https://docs.datomic.com/cloud/ions/ions-monitoring.html#metrics) the list of required keys includes "type", but in the example code it uses "units" instead.#2018-07-1122:58miridiusalso the whole monitoring document is repeated twice (it starts over at https://docs.datomic.com/cloud/ions/ions-monitoring.html#sec-9)#2018-07-1213:32jaretThanks for catching that the merging of doc branches somehow duplicated the page. I’ve fixed it.#2018-07-1204:28euccastroI'm trying to make a ring app as an ion. I pushed and deployed an app that uses com.cemerick/friend (admittedly a bit of a stress test). I got the following error when curl -iing the gateway API endpoint:
HTTP/1.1 500 Internal Server Error
Date: Thu, 12 Jul 2018 04:22:05 GMT
Content-Type: application/json
Content-Length: 157
Connection: keep-alive
x-amzn-RequestId: 1f30cc2e-858b-11e8-ac8f-b9d295c18321
x-amz-apigw-id: J5aYVFrQliAFs7w=
X-Amzn-Trace-Id: Root=1-5b46d768-43f8e546292ea7321adbb5a0;Sampled=0
java.io.FileNotFoundException: Could not locate slingshot/slingshot__init.class or slingshot/slingshot.clj on classpath., compiling:(cemerick/friend.clj:1:1)
I have no such problem locally, and slingshot appears in the list of downloaded libraries that got printed when I first deployed:
... {:s3-zip "datomic/libs/mvn/slingshot/slingshot/0.10.2.zip", :local-dir "/home/es/.m2/repository/slingshot/slingshot/0.10.2", :local-zip "/home/es/.cognitect-s3-libs/.m2/repository/slingshot/slingshot/0.10.2.zip"} ... #2018-07-1204:29euccastrothis is my deps.edn, FWIW
{:paths ["src/clj" "resources"]
:deps {com.datomic/ion {:mvn/version "0.9.7"}
org.clojure/data.json {:mvn/version "0.2.6"}
org.clojure/clojure {:mvn/version "1.9.0"}
com.cemerick/friend {:mvn/version "0.2.3"}
ring/ring-defaults {:mvn/version "0.3.2"}}
:mvn/repos {"datomic-cloud" {:url ""}}
:aliases
{:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.54"}
com.datomic/ion-dev {:mvn/version "0.9.160"}}}}}
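[Editor's note: as it turns out later in the thread, this was a dependency conflict. One workaround is to declare the transitive dependency explicitly at the top level of deps.edn so a single version wins; a sketch only (0.12.2 is the version euccastro reports fixed it):]

```clojure
;; deps.edn sketch: pin the conflicting transitive dep at the top level
;; so the ion build resolves one version of slingshot.
{:deps {com.cemerick/friend {:mvn/version "0.2.3"}
        slingshot/slingshot {:mvn/version "0.12.2"}}}
```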
#2018-07-1205:21henrikIs this section in the Datomic tutorial (https://docs.datomic.com/cloud/tutorial/assertion.html#sec-3) missing (d/transact conn {:tx-data (make-idents colors)})?#2018-07-1206:09euccastrotrying to use the session ring middleware with cookie storage seems to break the proxy integration:
Thu Jul 12 06:04:17 UTC 2018 : Endpoint response body before transformations: {"statusCode":200,"headers":{"Content-Type":"text\/plain","Set-Cookie":["ring-session=ECSI%2FAxqP4g3%2F6Lsf6j2gw6iTCd2jVL9CB2n8D%2BsBIY%3D--FweWg7tIHsIfkhtzoKxqC9YvJNtKEjzU%2BQtbF1Qzk20%3D;Path=\/;HttpOnly"]},"body":"T2zDoSAwIQ==","isBase64Encoded":true}
Thu Jul 12 06:04:17 UTC 2018 : Endpoint response headers: {X-Amz-Executed-Version=$LATEST, x-amzn-Remapped-Content-Length=0, Connection=keep-alive, x-amzn-RequestId=68d54df4-8599-11e8-a98d-17a42203bec1, Content-Length=254, Date=Thu, 12 Jul 2018 06:04:17 GMT, X-Amzn-Trace-Id=root=1-5b46ef61-1d46065e28b013d3ba616863;sampled=0, Content-Type=application/json}
Thu Jul 12 06:04:17 UTC 2018 : Execution failed due to configuration error: Malformed Lambda proxy response
Thu Jul 12 06:04:17 UTC 2018 : Method completed with status: 502
#2018-07-1206:47euccastroso just in case this bites someone: it seems like the AWS API gateway doesn't accept a list as a header value, and in general it doesn't accept multiple headers with the same name. a workaround, if you really need multiple headers with the same name, is to return the headers in different upper/lower case combinations (e.g., "Set-Cookie" and "sEt-cOOkiE" will work). you could write a ring middleware that does just that#2018-07-1210:01fmnoiseany thoughts about excision for datomic cloud - is it even planned to implement?#2018-07-1210:41stuarthallowayhi @U4BEW7F61. It is definitely on our radar. https://forum.datomic.com/t/support-for-excision-or-similar/323#2018-07-1211:37henrikI want to model a taxonomy, like this one:
Biology --> Medicine -> Internal
|-> Genetics
|-> Morphology
Eventually, I want to tag stuff with this taxonomy, such as an article entity tagged with genetics for example.
What would be a good way to model the taxonomy?#2018-07-1211:39henrikThis is my current attempt:
[{:db/ident :taxonomy/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db/doc "The title of a taxonomy node"}
{:db/ident :taxonomy/children
:db/valueType :db.type/ref
:db/isComponent true
:db/cardinality :db.cardinality/many
:db/doc "Children of a taxonomy node"}]#2018-07-1211:40henrikIt works. I’m just not sure if it’s an intelligent way to do it.#2018-07-1213:17chrisblom@henrik that looks reasonable to me#2018-07-1213:44val_waeselynck@henrik if the taxonomy graph is tree-like, :taxonomy/parent instead of :taxonomy/children is probably safer#2018-07-1213:44henrik@val_waeselynck Interesting! How is that safer?#2018-07-1213:45val_waeselynckWell, by having a cardinality-one attribute, you're being more explicit about the model ("a taxonomy has at most one parent")#2018-07-1213:46val_waeselynckAlso seems more reasonable to me that the parents, being more general, don't "know" about their children#2018-07-1213:53jonahbenton@henrik Relatedly, do you need to be able to navigate up the tree from child to parent? And is it possible for the taxonomy to be rich enough for there to be the same or similar names in different parts of the tree?#2018-07-1214:07henrik@jonahbenton Every node, regardless of level, should be entirely unique. Or, if it’s named the same, it is the same. And yes, navigation would have to be bidirectional. But as I understand Datomic, all references are bidirectional, right?#2018-07-1214:26val_waeselynckthey are, in the sense that you can easily navigate in both directions, whatever the query API you're using#2018-07-1214:30jonahbentonSo a given node may have multiple parents?#2018-07-1214:30henrikOh, I see. No, one parent I think.#2018-07-1214:32henrikThis is for categorising science into fields and subfields. Though now you got me thinking about whether modeling it as a network of subjects would be more powerful.#2018-07-1214:36jonahbentonYeah, probably would, though seems like it might depend on the size and the dataset feeding the categorization. 
Tags may be a useful modeling tool to capture commonalities (like computational-ness of the subfield). Perhaps also include a description attribute
But just because that particular model is hierarchical doesn’t mean that there isn’t a more powerful way to do it.
The point with this particular taxonomy is to try to keep it small(-ish), using it to create rather large, but interconnected groups of material.#2018-07-1214:46henrikI could essentially model a freer graph in the same way, right? Renaming parent to something like relation.#2018-07-1214:47jonahbentonAh, that sounds neat. It sounds like datomic as a metadata store- this taxonomy applied to source material that lives outside datomic- which I'm thinking about for a project as well.#2018-07-1214:48jonahbentonYes, I believe so, I have seen some "node" "edge" terminology in schemas#2018-07-1214:48henrikYeah, the source material would come from scientific publishers, in the form of articles, journals, books etc. And we have to find a way (many ways, actually), to tie all that disparate information together into a cohesive, consumable collection.#2018-07-1214:48henrikWhat type of material will you be working with?#2018-07-1214:50henrikActually, with edges/nodes, I’m back to a list of relatives, though. Just not necessarily parents.#2018-07-1215:03jonahbentonThat sounds neat! Lots of interesting problems there. For me, as a side project, I'm looking at reimplementing a container artifact metadata api. The api is from a project called Grafeas: https://grafeas.io/ which acts as a metadata repository around container usage, vulnerabilities, deployment history, stuff of that nature. The basic technical idea is that grafeas is one of many projects in the container ecosystem that are glorified packagings of go code generated from protobufs. I like go, but when it comes to code generation, it's an awkward workflow, and the go people argue about checking code into the repo, doing it at build time, yadda yadda. It seems to me that in the clj space, you should have a pretty clean workflow of generating schema and data models from protobuf for the different layers -> spec, apis, datomic schema- and that should be sufficient to yield something of a working system. 
I don't see any of that tooling right now, so that's what I'm looking at.#2018-07-1215:19henrikCould you summarize the problem and the value proposition for me? I don’t think I’m familiar enough with the problem to fully understand the solution.#2018-07-1216:08jonahbentonKind of you to ask, it's niche, so the explanation is a little long:
Companies/orgs that run applications- api-type services and scheduled/batch jobs- have been "containerizing" their applications. Once you have containerized, there are a whole set of questions you'd like to be able to ask about your fleet, some operational, some security related, etc. Do any of the jvm applications I'm running use the vulnerable version of struts? If so, where are they in my network and for how long have they been running? How many of my applications have had vulnerabilities reported against their dependencies? What third party libraries are my service applications consuming, and are any of those licenses GPLV3?
In even a small plant you wind up wanting to have a metadata repository into which that sort of operational and security data can be pushed, and against which one can run queries. Beyond that, you want to be able to plug other consumers and providers into that repository. You want to be able to use vulnerability scanner X and build tool Y and signing tool Z, and Google has succeeded in getting commitments for adoption of this particular metadata API by various players in this ecosystem.#2018-07-1216:09jonahbentonI'm curious about this as a side project, as I do some work in security and have been enthralled with containers and kubernetes.
From a product standpoint, it seems like Datomic should be a good fit for this sort of metadata, both for storage and for query. Having a fundamentally immutable store that knows-when-you-knew-something is useful for security, and datalog is more capable than many other languages from a query perspective.
On a technical level, I'm curious about the ergonomics of going from protobuf->spec, protobuf->api, protobuf->datomic schema, and am curious about data-driven systems in general. There is a project called "vase" from the Cognitect folks which was an experiment in building a fully data-driven api + database. Write as little code as possible, describe the system entirely using data, how far can you go with that? So on a technical level I'm basically curious whether protobuf is a feasible "front end" with vase as a "back end".#2018-07-1222:39henrikThank you for the description, that does like an interesting (and hard) problem. I can see how managing tons of containers quickly takes on qualities of cat herding.
I remember the Vase introduction from a Cognicast way back. “Because it sits on top of Pedestal.”
In the more abstract, it’s interesting to try to imagine how to keep some of the ergonomics of Clojure once you pass the border of the application. Philosophically, a function and a container have sort of morphological similarities, but the environment is as different as that of a one-cell organism to that of an animal.#2018-07-1318:11jonahbentonAgree! Very interesting.
Working in clj on applications that will get deployed into k8s, one can't avoid engaging in thought experiments about a repl that directly creates and interacts with k8s resources in a first class manner. The repl and kubectl are equivalent levels of abstraction. One can imagine having a way to produce a pseudo clj namespace from a container image + a swagger spec, so loading that namespace under the hood spins up a container, and calling functions turns into (cross-language) service calls.
Certainly we've seen movies like this before; when abstractions are similar but not equivalent the pain is often greater than the benefit. But still interesting to think about.#2018-07-1214:42rhansenHmm... I have a list of references, and I want to check if those references all belong to a certain entity. What would be the best way to construct such a query?#2018-07-1214:46val_waeselynckwhat does it mean for a reference to belong to an entity?#2018-07-1214:47rhansen[?entity :person/friends ?some-ref]#2018-07-1214:47val_waeselynck@rhansen I would use a Datalog query to list or count those that don't#2018-07-1214:49rhansenInteresting. Thanks.#2018-07-1214:49euccastroI've done the ring wrapper I mentioned above. it's only tested in the REPL (and by deploying to ions, of course) so far, but I hope it's useful if you're tinkering with hosting a ring web app in ions: https://github.com/euccastro/expand-headers#2018-07-1214:58val_waeselynck@euccastro sorry, I don't follow what problem you are addressing?#2018-07-1214:59euccastro@val_waeselynck are you talking about my response to you or about the github repo I mention above?#2018-07-1215:00val_waeselynck@euccastro my response to you#2018-07-1215:01euccastrooh sorry I think I misunderstood your question to @rhansen#2018-07-1215:03euccastroI've deleted my responses since they only add noise#2018-07-1215:04val_waeselynckah ok 🙂#2018-07-1215:04val_waeselynckdebugging human conversations#2018-07-1217:30euccastroFWIW, the problem I mention here (https://clojurians.slack.com/archives/C03RZMDSH/p1531369681000042) persists if I manually add to my own deps.edn a dependency on the same slingshot version as cemerick.friend does (0.10.2), but for whatever reason it doesn't manifest if I upgrade the slingshot dependency to the current version, 0.12.2#2018-07-1217:33oscar@euccastro Upgrade to the newest Ions. It sounds like you have dependency conflicts. 
https://docs.datomic.com/cloud/ions/ions-reference.html#dependency-conflicts#2018-07-1219:09euccastrothanks @oscar!#2018-07-1219:10stuarthallowayHi @euccastro! If that does not work you should be able to spot an error in the logs, per https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2018-07-1219:15euccastrothanks @stuarthalloway! I just noticed I'd missed that whole "Operation" section of the docs 😛#2018-07-1303:42shoHi @euccastro, have you managed to create a ring app as an ion? Does it work just fine with your hack for the headers problem? I'm just trying to do the same exercise and curious what to expect.#2018-07-1304:48euccastroso far it works fine. as you may have seen in the #datomic channel, I have stumbled into some dependency problems too, but so far I'm managing by paying attention the first time I push a version that introduces a dependency and manually declaring any conflicting dependencies#2018-07-1304:49euccastrosee this (not ions specific) for how to associate a domain name to your API Gateway app: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html#2018-07-1304:53euccastroalso, if you want to be able to serve the root (/) directory, you need an additional ANY method in the root (/) resource of your API Gateway. the ions tutorial doesn't get into that. you shouldn't remove the /{proxy+} resource, though. AFAICT both are needed#2018-07-1304:55euccastroall that said, I haven't tested much functionality yet, only that basic ring handlers work#2018-07-1304:55euccastrogoogle "keep aws lambda warm" for another important consideration if your app is user-facing or otherwise latency sensitive#2018-07-1304:57euccastrothe good thing about these hoops is that you only need to jump through them once I think. 
I haven't touched my API Gateway configuration at all since I initially set it up, and I don't expect to have to worry much about it#2018-07-1305:01euccastrohttps://datomique.icbink.org where I'm testing these things. that is backed by ions (solo deployment). the counter (refresh the page) is kept in the cookies, and the list of accessed paths is kept in a local atom (note that any process-local state gets lost on deployments though)#2018-07-1305:03euccastrothis is my ring handler ATM FWIW:
;; requires for the middleware used below (wrap-expand-headers is from
;; the expand-headers library mentioned earlier):
(require '[ring.middleware.session :refer [wrap-session]]
         '[ring.middleware.session.cookie :refer [cookie-store]]
         '[ring.middleware.params :refer [wrap-params]]
         '[ring.middleware.keyword-params :refer [wrap-keyword-params]])
(def log (atom []))
(defn ring-handler
[{:keys [headers body uri params session]}]
(if (= uri "/favicon.ico")
{:status 404
:body "Not found!"}
(do
(swap! log conj uri)
(let [count (get session :counter 0)]
{:status 200
:headers {"Content-Type" "text/plain"
"p-ro-va-heaDers" ["a" "b" "c" "d" "e"]}
:body (str "Olá " count "-" (pr-str @log) "!")
:session (assoc session :counter (inc count))}))))
(defn dup [xs]
(conj xs (first xs)))
(defn wrap-add-cookie [handler]
(fn [req]
(update-in (handler req) [:headers "Set-Cookie"] dup)))
(def ring-app
(-> ring-handler
(wrap-session {:store (cookie-store {:key "a 16-byte secret"})})
wrap-keyword-params
wrap-params
wrap-add-cookie
wrap-expand-headers))
#2018-07-1305:04euccastroas you see I've been mostly tinkering with multiple header values and ring handlers, not doing anything fancy yet#2018-07-1305:08euccastroI'm pushing my experiments here if you're interested (ignore the /old folder): https://github.com/euccastro/semente#2018-07-1306:36shoSorry I've been offline for lunch. All of your information is very helpful, especially because I haven't found anyone else doing the same stuff yet.#2018-07-1306:50shoI'm still not 100% convinced whether the approach of building a ring handler behind API Gateway is the best decision for me, but the alternative would be doing auth with AWS Cognito, which means throwing away a good chunk of Clojure code and moving away from the Clojure ecosystem.#2018-07-1306:58shoSo I want to first try my server-side code with Buddy auth as a ring ion.#2018-07-1307:08shoAbout java cold start, I'm thinking about dispatching an event to knock the ion app right at the moment users visit my static site on CloudFront and having one ion handle all of my api requests that requires both authentication and authorization. Not sure if this is a good strategy, but I plan to try it and examine the latency problem with my eyes.#2018-07-1307:14shoI'll be out for a few days, but if I happen to find anything valuable, I'll ping you and share the info. Cheers.#2018-07-1321:43euccastrothanks!#2018-07-1219:15euccastro(btw it did work)#2018-07-1221:09eggsyntaxIs anyone aware of any writing or documentation out there about guarding against malicious datomic queries, especially preventing queries with too great a performance impact? I don't think it makes sense to naively expose queries entirely to the public (or semi-public in my case, ie logged-in users, with only signups vetted). But I'm interested in seeing what's been written on the subject. 
Didn't find anything relevant on a quick review of the datomic docs.#2018-07-1621:05timgilbertI've thought about this a lot, but never found much in the way of writing on the subject. In general the problems are similar to problems that other graph databases also face. But there's not tons of general literature available for those either.#2018-07-1621:07timgilbertAt my company we did go through an exercise of parsing pull queries and then limiting specific queries to a certain depth and doing other validations on them#2018-07-1621:08eggsyntaxThanks, Tim! Any particular tips/gotchas on that process?#2018-07-1621:08timgilbertBut we eventually moved to keeping all the queries on the server where we could control them, and then moving to a GraphQL interface which has its own set of issues#2018-07-1621:09eggsyntaxHeh. We've been doing some exploration on a new project, and I had put off making DB decisions. I added GraphQL so I could support client-side "pull"-specification. Now that I've decided to go with datomic, I'm dropping GQL like a hot potato 😉#2018-07-1621:10timgilbertOne thing that we ran into a bunch was trying to figure out how to guard against attacks where a user is able to escape her own company and start getting data about another person's company by backref-linking through a shared entity#2018-07-1621:10eggsyntaxIt doesn't seem like GQL really provides any inherent support for limiting query specification impact either, seems like you're left facing the same problem.#2018-07-1621:10eggsyntaxBut not the backref aspect I guess, huh?#2018-07-1621:11timgilbertIf you decide to expose some of your datomic stuff via lacinia, we open-sourced a library that does some of the grunt work for you: https://github.com/workframers/stillsuit#2018-07-1621:12eggsyntaxHmm, seems like one option (for datomic) would be to parse the pull and look for backrefs, and then just reject any calls that had them.#2018-07-1621:12timgilbertYeah, except in cases where you actually need 
them, like you have a project and are looking for all users with :person/project ?p#2018-07-1621:13timgilbertAnyhow, we thought about it for a while and eventually decided keeping the queries on the front-end was going to be a black hole of engineering time#2018-07-1621:14timgilbertI think there are ways you could work around it, like have a "dev mode" where the client sends them over and a "prod mode" where they are replaced by keywords or something#2018-07-1621:17eggsyntaxYeah, I can definitely see the possibility of it becoming a terrible timesuck. The keyword approach seemed promising to me too.
This is a bunch of really useful info for me. May save me from going down some wrong roads. I really appreciate it :man-bowing:#2018-07-1621:18timgilbertWe were also thinking about moving to a multi-tenant setup where user data from different orgs was stored in entirely separate databases, which would have been easier to do on day 1 than day 638 or whatever#2018-07-1621:18eggsyntaxAh, yeah, no doubt.#2018-07-1621:18timgilbertNo prob. I'd say definitely give it some thought, you might stumble on something we didn't, and I'll look forward to reading your blog post about it 😉#2018-07-1621:21eggsyntaxSeems like maybe writing your schema explicitly to avoid the need for backrefs in client requests might work, although I'm not at all sure of that.
Or maybe you could take an approach like disallowing certain things like backrefs, but being able to pass keywords that tell the server to include datomic rules that provide just the backrefs that you need.#2018-07-1621:22eggsyntaxie hide the potentially dangerous stuff behind keywords and disallow it in client requests, but then expose the full range of non-disallowed stuff for the client, for the sake of power.#2018-07-1621:32timgilbertIt's possible, yeah. Starting with a subset seems like a promising approach, or maybe a query DSL that you could validate and then translate back into pull syntax on the server side#2018-07-1221:12eggsyntax(this is re: on-prem / peer, btw)#2018-07-1222:01jonahbentonFor reads, in terms of constraining cpu/ram resource utilization- in the peer architecture, the query processing is happening wholly in your app, so this is under your control. You can give inbound requests as much or as little time as you want on a thread, then cancel; or retrieve only a limited number of results, or whatever...#2018-07-1222:02eggsyntaxFor sure! I'm just wondering if there are some examples of approaches that people have taken to that, that may bring up datomic-specific considerations that I have thought of.#2018-07-1303:09eoliphanti’m seeing this error for a largish (~500 datoms) transaction
"java.lang.IllegalArgumentException: No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Integer\n\t
Any ideas what this might be?#2018-07-1311:15stuarthalloway@U380J7PAQ somehow a wrapped Integer showed up where it should not, probably should be a primitive long. Let me know if you can make a small repro. I doubt this has to do with tx size.#2018-07-1321:53eoliphanthmm, will do some digging. I’m using transit to sneak edn in and out of my ions. ran into the customary issues/surprises on my cljs client, might be something similar on the server#2018-07-1518:36eoliphantOk so.. um.. that was in fact the problem… but it was weirdly intermittent lol. I’m uploading some info off of a gene sequencer. I’d totally forgotten that my parser on the server was in fact calling Integer/parseInt to set the value of the associated datoms But, it frequently worked just fine. Changing to Long/parseLong did fix it.
Gonna finish this stuff up. Then try to go back and see if i can get a consistent test case#2018-07-1320:49souenzzoHello
I'm on "classic peer"
I have a datomic function :empty-query? that is pretty simple
(def empty-query?
(d/function '{:lang :clojure
:requires [[datomic.api :as d]]
:params [db query & args]
:code (when-not (->> (into [db] args)
(hash-map :query query :args)
(d/query)
(empty?))
(throw (ex-info "FAIL" {})))}))
But some queries produce different results on the peer and on the transactor
For example
'[:find ?e
:in $ ?ignore-set
:where
[?e :app/foo]
(not [(contains? ?ignore-set ?e)])]
on the peer (d/with and d/transact on "mem") it works "as expected"
on the transactor (d/transact on "dev") it always returns "empty?"
Then I changed to
'[:find ?e
:in $ ?ignore-set
:where
[?e :app/foo]
[(contains? ?ignore-set ?e) ?q]
[(ground false) ?q]]
That second one always returns the same results ("as expected") on transactor and on peer.
Is it a bug?#2018-07-1612:55souenzzoBUMP.
It's causing me concurrency problems and there is no simple way to test if the query will work on the transactor or not #2018-07-1704:03souenzzohttps://forum.datomic.com/t/inconsistency-between-query-on-peer-and-transact/548#2018-07-1723:20souenzzoAny hope on this?#2018-07-1911:42stuarthalloway@U2J4FRT2T I have reproduced this but have not fully isolated it yet. The workaround with ground seems sound.#2018-07-1912:40souenzzoThe worst part is that I can't test if my query will work or not. The only way to test it is testing against the dev/free transactor, and that's way slower.#2018-07-1915:22souenzzoWill there be an issue to fix this? @U072WS7PE#2018-07-2105:25souenzzo@U072WS7PE any news on this? Is it a bug that will be fixed? Will I need to always run all my tests on datomic:free? Do I need to make a repo to reproduce it?#2018-07-2202:14souenzzo@U072WS7PE repo to reproduce the bug
https://gist.github.com/souenzzo/c7b5a5434d4c04efcc58802c81b46023#2018-07-1415:16Björn EbbinghausIs it wise to store sequential data in datomic? Like a log.
I need to store sequences of sequences of events. Like this:
{:user1 [[:a :b] [:a :b :c]]
:user2 [[:b :c]]}
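[Editor's note: one common pattern for ordered data in Datomic, sketched with hypothetical attribute names, is to store an explicit position attribute on each element and sort on read:]

```clojure
;; Hypothetical schema: each event records its position in the sequence.
[{:db/ident       :user/events
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many
  :db/isComponent true}
 {:db/ident       :event/index
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}
 {:db/ident       :event/kind
  :db/valueType   :db.type/keyword
  :db/cardinality :db.cardinality/one}]

;; Reading a user's events back in order:
(->> (d/pull db [{:user/events [:event/index :event/kind]}] user-eid)
     :user/events
     (sort-by :event/index)
     (mapv :event/kind))
```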
#2018-07-1713:13stuarthallowayhi @U4VT24ZM3! There is some discussion of patterns for sequential data at https://forum.datomic.com/t/handling-ordered-lists/305#2018-07-1416:18miridiusMy clj tool can't seem to download the com.datomic/ion jar from S3. Even if I directly clone the ion-starter project and then try to run clj, it gets a 403 from Amazon S3:
$ git clone && cd ion-starter
$ clj
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:0.9.16
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.16
<snipped>
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.16 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 6F26D77435731E93; S3 Extended Request ID: bcJFpRXI081lRtaNVQeMMyrTWhU+wbqWfwOk/YjCD+m5t0mfCwHFWcGdqVYAbMK75k5S4Ei9Y4M=)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:422)
...#2018-07-1416:45miridiusok looks like it was an AWS permission issue. My user was in the datomic-admin-<system name> group but evidently that's not enough, I gave it the AdministratorAccess policy and now it works :+1:. Figuring out exactly which permission was missing is an exercise for later, I guess 😁#2018-07-1417:00miridiusis it possible to deploy multiple ions applications to the same datomic cloud system? I suppose they would at least have to have the same name, since you can't do a push if the ions application name doesn't match the system's application name#2018-07-1418:10chris_johnson@miridius in my experience you can, though you might need to have one top-level ion-config.edn that knows about all the applications#2018-07-1418:49eoliphanthow does that work exactly? I think i tried, setting :app-name to something arbitrary and it didn’t seem to like that#2018-07-1502:05miridiusIf I try to download the bundle to my own machine using aws s3 cp then it works fine#2018-07-1506:29henrikIs it possible that Datomic Cloud will be available on Google Cloud eventually?#2018-07-1600:45eoliphantin @stuarthalloway’s longer talk on ions he sort of alluded to it as a possibility if there’s sufficient demand, etc etc. As it stands, it’s very “AWS’ey”#2018-07-1616:29henrikRight! Well it’s not a HUGE problem. I intend to make use of some very Googly services. They’re not realtime though, so calling them from AWS is not insurmountable. Nevertheless, it would be nice to be a bit more consolidated.#2018-07-1511:18eraad@stuarthalloway Hi! There is a 404 error in https://www.datomic.com/details.html. The “Learn more” link in Hierarchical.#2018-07-1516:13jaret@eraad I’ll have to fix that on Monday. But it should link here: https://docs.datomic.com/cloud/schema/schema-modeling.html#2018-07-1516:13jaretThanks for the report!#2018-07-1603:55eoliphantso, about that :app-name in ion-config.edn lol. As far as I can tell so far, that must be the same as your datomic cloud system name? 
Curious because I have been cranking away, and have enough ion code, that I’d probably like to break it out into separate projects, that would still be installed in the same system/instance/whatever. Is that possible at this point?#2018-07-1701:18stuarthalloway@U380J7PAQ you can set the application name when you create a system, see https://docs.datomic.com/cloud/ions/ions-reference.html#ion-config. If you have N library projects and 1 app project, the app project should have the ion-config.edn file.#2018-07-1701:21eoliphantok, but basically there can be only one app project per ‘system’? Where a system is a deployed instance of Datomic Cloud?#2018-07-1701:22eoliphantI was thinking (hoping 😉 ) that I could install multiple apps, not just libs that a single app uses. I may be looking at this incorrectly. I’m moving what used to be 4-5 clojure/datomic microservices into ions. I was thinking they’d each be an ‘app’ in a given ‘system’.#2018-07-1711:27stuarthalloway@U380J7PAQ an Ion app is 1-1 with an AWS CodeDeployment app, which is the unit of deployment to a compute group (not a system). When we release query groups (see https://docs.datomic.com/cloud/whatis/architecture.html#system) you will be able to deploy a different app to each query group in a system, if you want.#2018-07-1712:16eoliphantok, i think i got it now. So (granted, I know this is new even for you guys lol), in your estimation would query groups be the logical longer term unit of demarcation, for more or less independent chunks of functionality? I’m working through how this would scale in terms of dev teams.
I’m about to turn a scrum loose on this but will have others coming online in the next quarter or so.#2018-07-1712:17eoliphantand on that note, lol, any ETA on query groups?#2018-07-1712:26stuarthallowayworking on it 🙂 Because it is a CloudFormation change requires more coordination with AWS#2018-07-1712:27stuarthallowayI am recommending "popup solo system per dev who needs isolation", query groups will provide another axis here.#2018-07-1712:28stuarthallowayvery interested in your feedback on mapping the tech to a team workflow, and already working on improvements in this area as well#2018-07-1721:08eoliphantyeah, i’d already adopted the solo per dev approach. which makes things pretty awesome. self contained env. The query groups stuff is more about app segmentation. while all the code is all hanging out in a compute/query group or whatever. I’d stll like to have the ‘microservice feel’ lol. Where my ‘apps’ have their own db’s etc etc. It’s a little brain twisty. But yeah will definitely keep you apprised. Fortunately the first app is small enough for this to work.#2018-07-1604:46chris_johnsonOkay, so this has taken far longer than I wanted to get into a state I am okay sharing, but here is an early draft: https://github.com/hlprmnky/ion-appsync-example#2018-07-1604:47chris_johnsonFull-stack GraphQL example backed by the ion-starter code and data set. Thanks to @steveb8n for his work on the Cognito-aware SPA client.#2018-07-1604:47chris_johnsonI’m just about to crash and then go catch a plane for a short vacation, but I will try to remember to post this to the Datomic forum tomorrow as well. Cheers!#2018-07-1618:06henrikThat’s great, thanks for writing this up!#2018-07-1605:14steveb8nNice to see it come alive @chris_johnson#2018-07-1612:19eoliphant
Nice. I'm working on some similar stuff with amplify in a cljs client, and a ring ion entry point via api gateway. And a poc with lacinia, umlaut, etc to see if we can build a "better AppSync"#2018-07-1612:20eoliphantHey is it, or will it be, possible to deploy into an existing VPC?#2018-07-1713:11stuarthalloway@U380J7PAQ no current plans for "BYO VPC" -- it is an implementation and support hairball#2018-07-1713:14eoliphantUnderstood, I can imagine, lol. We’re doing some significant re-engineering of our VPC’s, one per lifecycle stage, then dedicated ones for transit, logs/secvault, etc. Will dig in on how to best integrate datomic cloud’s config#2018-07-1617:20firstclassfuncHey guys, Is there a better place to ask Datomic-ION setup questions?#2018-07-1618:54jaret@firstclassfunc here or on the forums. https://forum.datomic.com/#2018-07-1620:48Joe LaneAnyone ever seen an error like this before? "No implementation of method: :value-size of protocol: #'datomic.cloud.tx-limits/ValueSize found for class: java.lang.Integer"#2018-07-1620:48Joe LaneI’m trying to transact some data, which transacts locally, but not from apigw#2018-07-1621:47Joe LaneFigured it out. Datomic Cloud doesn’t (currently) seem to store integers (nor convert them automatically to longs). I had a field that was using ->int from semantic-csv to convert from string to java.lang.Integer.#2018-07-1621:47Joe LaneInstead Datomic Cloud stores longs. Once converting the int to a long it worked great. Hope this helps someone in the future.#2018-07-1622:52eoliphantYep, ran into that myself a few days ago. Weirder thing was that it seemed to be intermittent. I had some code that would call parseInt on a string, transact it in, it would fail in some cases but not others#2018-07-1712:48RodinHi, I'm trying to load about 0.5GB of data into datomic. Can anyone confirm that transact is not lazy, i.e.
when passing it an ISeq that sequence will be reified into a list and/or the head of that sequence will be held onto?#2018-07-1712:50jaret@rodinhart are you trying to transact that amount of data as a single transaction?#2018-07-1712:53RodinWell, I'd like to. The follow-up question, as expected, would be: how do I batch data that has references to earlier entities?#2018-07-1712:56jaretThat’s almost certainly far too much data for a single transaction. To batch you’ll want to build up batches with some kind of identifier on your entities. Like using lookup refs or unique identities that you create. The tempids map that is returned from transact can be used to map entities.#2018-07-1712:56RodinAre you confirming transact isn't lazy?#2018-07-1712:57RodinAnd are you saying if I give entities a temp id for :db/id, the return value of transact will give me a mapping from those temp ids to the actual ids in the db?#2018-07-1712:57marshallyes ^#2018-07-1713:00marshall@rodinhart https://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution#2018-07-1713:01RodinAh, brilliant, very helpful.#2018-07-1717:55Joe LaneAnybody know why after removing the backslashes for the ion deploy step from the :group tag and other tags the backslashes still occur in :rev and :uname?#2018-07-1719:29stuarthallowayhi @U0CJ19XAM you should not need backslashes anywhere#2018-07-1719:47Joe Laneclojure -A:dev -m datomic.ion.dev '{:op :push :uname "ch357"}'
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from
(cognitect.s3-libs.s3/upload "datomic-code-f070a20d-8cb2-44f6-b83a-a47dd69ed035" [{:local-zip "target/datomic/apps/someapp/unrepro/ch357.zip", :s3-zip "datomic/apps/someapp/unrepro/ch357.zip"}] {:op :push, :uname "ch357"})
{:uname "ch357",
:deploy-groups (someapp-compute),
:dependency-conflicts
{:deps #:com.cognitect{http-client #:mvn{:version "0.1.80"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :uname \"ch357\"}'",
:doc "To deploy to someapp-compute, issue the :deploy-command"}
I have to pull backslashes off of the output of the above command still, I realize you all removed the backslashes from the other commands (thank you!), but these still seem to remain.
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :uname "ch357"}'
#2018-07-1720:01Joe Lane@U072WS7PE Just Tried it again from Vanilla Bash, same issue, now with :rev instead of :uname
bash-3.2$ clojure -A:dev -m datomic.ion.dev '{:op :push}'
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from
(cognitect.s3-libs.s3/upload "datomic-code-f070a20d-8cb2-44f6-b83a-a47dd69ed035" [{:local-zip "target/datomic/apps/someapp/stable/a510fee0af59e67a9ba99cfff20c935b7b02d517.zip", :s3-zip "datomic/apps/someapp/stable/a510fee0af59e67a9ba99cfff20c935b7b02d517.zip"}] {:op :push})
{:rev "a510fee0af59e67a9ba99cfff20c935b7b02d517",
:deploy-groups (someapp-compute),
:dependency-conflicts
{:deps #:com.cognitect{http-client #:mvn{:version "0.1.80"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group someapp-compute, :rev \"a510fee0af59e67a9ba99cfff20c935b7b02d517\"}'",
:doc "To deploy to someapp-compute, issue the :deploy-command"}
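(Editor's note: the :deploy-command in the push output above can also be run without leaving the REPL by shelling out. A sketch using clojure.java.shell; the deploy! helper name is made up, and the group/rev values are the ones from the output above:)

```clojure
(require '[clojure.java.shell :as shell])

;; Hypothetical helper: run the :deploy-command emitted by {:op :push}
;; from the REPL instead of a separate terminal. pr-str produces the
;; same edn map literal the push output tells you to pass on the CLI.
(defn deploy! [group rev]
  (shell/sh "clojure" "-Adev" "-m" "datomic.ion.dev"
            (pr-str {:op :deploy :group group :rev rev})))

(deploy! 'someapp-compute "a510fee0af59e67a9ba99cfff20c935b7b02d517")
```

Note the quoted symbol for the group: the push output prints the group unquoted, which is what pr-str emits here.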
#2018-07-1800:51sho@U072WS7PE I have the same issue as @U0CJ19XAM. Even after the latest update, I still get unnecessary backslashes whenever I "push". For the other ops, this does not occur.#2018-07-1717:56Joe LaneA better question may be, is there a way I can just invoke this library from the repl so I never have to go back to my terminal and mess with the deployment step in a different window?#2018-07-1801:22fdserrOn Prem DB/TX functions: any trick to deploy changes without a Transactor restart (classpath reload)? I’m using `d/function` with a single :require, no closures or multimethods. Many thanks.#2018-07-1813:21marshallis the namespace you’re requiring already on the classpath?
If so, you should be able to install the transaction function and then use it without a restart#2018-07-1902:36fdserrIndeed, with proper env set.#2018-07-1902:38fdserrI can use the fns, but I’d be keen to be able to hot deploy updates. Thanks.#2018-07-1912:51marshallyou can definitely install and use txn functions on a running transactor without a restart#2018-07-2000:21fdserrSo you mean a running Transactor will grab changes in its classpath without a restart? I’m gonna give it another try... anything specific to be aware of? (option, env...)#2018-07-2017:00marshalltransaction functions are not classpath functions#2018-07-2017:01marshallyou can use a classpath function as a txn function#2018-07-2017:01marshallbut you can also install a “regular old” transaction function (not as a classpath fn)#2018-07-1807:28ignorabilisCloud queries: any way to use :limit to get the newest instead of the oldest values?#2018-07-1808:24oscarDatomic doesn't guarantee a return order. Your best bet is to sort the entire set on some criteria (like :db/txInstant) in an ion. It could be a query-fn or a lambda.#2018-07-1815:17ignorabilis@U0LSQU69Z - thanks a lot!#2018-07-1807:36ignorabilisAnd again on cloud - (pull ?eid [(:event/inputs :default false)]) does not return false when there are no records, whereas (pull ?eid [(default :event/inputs false)]) works properly; the first one is in the docs as an example; am I doing something wrong?#2018-07-1813:25marshallwhere in the docs is that example? I believe that might be a syntax issue#2018-07-1813:34marshall=> (d/pull (d/db conn) '[:artist/name (:artist/endYear :default "N/A")] paul-eid)
#:artist{:name "Paul McCartney", :endYear "N/A"}
Seems OK here.
What version of Datomic and client?#2018-07-1813:36marshalljust noticed you didn't quote your pull expr#2018-07-1813:36marshalloh. interesting. it may not work with false#2018-07-1813:39marshalllooking into it#2018-07-1815:16ignorabilis0.8.56 is the client; I'm not sure about the version of Datomic, but we updated somewhere after the release of ions#2018-07-1815:16ignorabilisthanks 🙂#2018-07-1807:48kirill.salykinHi all, for datomic on-prem, how does one filter based on LocalDate? Because it seems like datomic uses the old java Date.#2018-07-1811:10eoliphantThat's correct. Datomic uses clojure #insts, which are java Dates. Just convert as necessary #2018-07-1811:15kirill.salykinMakes sense, thanks. Would be nice to have all the new java date time things tho#2018-07-1815:23ignorabilisDatomic Ions - Is there a way to ensure that data is sorted by time when being transacted? I.e. instead of sorting it by :db/txInstant over and over again for each query we want to have the default functionality of classic SQL databases, where everything is sorted by time by default#2018-07-1815:24Joe LaneWhy do you want that?
Are you sure you need it?#2018-07-1815:24Joe LaneAlso, when you say sorted by time when being transacted do you mean being queried?#2018-07-1815:25ignorabilisWe have some values that get added over time; we want to get from the db only the latest N values#2018-07-1815:27val_waeselynck@ivan.yaroslavov btw, I recommend you don't rely on :db/txInstant and use a custom attribute for that: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2018-07-1815:29val_waeselynckwhat's more, with a custom attribute, you will be able to use the indexes for that attribute to your advantage, either using comparisons clauses in Datalog or seek-datoms#2018-07-1815:36ignorabilis@val_waeselynck - that is ok, the main concern is that we don't want to constantly be sorting hundreds of values; we have an entity that contains a component entity with cardinality/many; we just want to get the latest N values in an efficient way#2018-07-1815:37ignorabilisSo in an ideal world part of the query would be (:my/entity :limit 50 :ascending false); of course :ascending is pseudo code#2018-07-1815:38val_waeselynck@ivan.yaroslavov if the target of the to-many are entities in the same partitions, seek-datoms should sort them in ascending order; you could use some dichotomy algorithm to get the latest#2018-07-1815:40ignorabilisbut we want :ascending false; also could you please elaborate on dichotomy algorithm?#2018-07-1815:42val_waeselynckit's easier to explain with dates; Datomic's index API only give you datoms in ascending order. So if you want the first 50 it's easy, but the latest 50 it's harder. 
However, you can query for the datoms starting from an exponentially decreasing lower bound date until you get to 50.#2018-07-1815:42val_waeselynckE.g give me the datoms from 1 day ago to now; then give me the datoms from 1 day ago to 2 days ago, then from 4 days ago to 2 days ago, etc.#2018-07-1815:43val_waeselynckuntil you get to 50#2018-07-1815:44favilaor maybe another attribute for indexing#2018-07-1815:44val_waeselynckbut you know, if we're just talking about hundreds of ref-typed values, you might as well realize them all in memory, since they will probably be in the same segment anyway#2018-07-1815:44favilayou could separately store an indexed long which is the date in milliseconds negated#2018-07-1815:45favilathat would give you a cheaper "newest stuff" index#2018-07-1815:46val_waeselynckthe thing is, you also have to restrict the search to the owning entity#2018-07-1815:46favilad/index-seek before (or at the top of) a query#2018-07-1815:47val_waeselynckor a compound index#2018-07-1815:50ignorabilisok, thanks, we'll try the attribute for indexing#2018-07-1822:11johnjUsing the free transactor, a simple write takes ~15ms on average, is this normal? (in a single machine)#2018-07-1822:19eraadHi! One of my co-workers is thinking about setting up a “long running” Datomic Ion as a Kafka client to process real time events. I see there are a lot of loose ends (how to start it, monitor it, stop it, etc.). 
Any feedback?#2018-07-1822:24eraadIt would be cool if Datomic had an Ion configuration option (similar to Lambda and API GTW) called Kafka, so Datomic can manage the long-running process for you.#2018-07-1911:52henrikIs it a good idea to create something like :internet/email, and inject it everywhere for people, companies, what have you, rather than :person/email, :company/email and so on?#2018-07-1911:54chrisblomi prefer specific attributes over generic attributes#2018-07-1911:54henrikWhy?#2018-07-1912:08chrisblomdifferent entities may have different requirements, and it makes it easier to do queries like “give me all the email addresses of users”#2018-07-1912:09chrisblomfor example, for users you may want to use emails as ids, but for companies not#2018-07-1912:09chrisblomor: a user can have only one email, but a company can have more#2018-07-1912:14chrisblomanother issue is that an email address might be used as the :internet/email of both a company and a user. If you want to use this email as an id, you will run into trouble#2018-07-1912:21henrikRight, I see your point.#2018-07-1912:25chrisblomthere are valid use cases for generic attributes of course#2018-07-1912:41henrikI’ve got to think through where to draw the line. Theoretically, you could say that names are unique as well. This person-entity refers to the name-fact “Jane Smith.” As does a bunch of other person-entities.
The utility would be minimal though.#2018-07-1912:02dominicmhttps://docs.datomic.com/cloud/whatis/data-model.html#sec-5
> For example, an application that wants to model a person as an entity does not have to decide up front whether the person is an employee or a customer. It can associate a combination of attributes describing customers and attributes describing employees with the same entity. An application can determine whether an entity represents a particular abstraction, customer or employee, simply by looking for the presence of the appropriate attributes.
I feel like Datomic is encouraging :internet/email here.#2018-07-1912:05henrikI figure URLs and emails should be candidates for uniqueness. You could then theoretically pull out every entity where that URL or email appears.#2018-07-1912:16chrisblomwhat if a company and user share the same email?#2018-07-1912:17henrikThey’ll point to the same datom I suppose. That’s kind of what I contemplate might be a feature.#2018-07-1912:18chrisblomit can be if that is what you want#2018-07-1912:20chrisblomi’d say they point to the same entity, not datom#2018-07-1912:20chrisbloma datom is a single [entity attribute value …] fact#2018-07-1912:23henrikThis is what I’m trying to wrap my head around at the moment. In words, I’d like to express it as “There is such a thing as an email that is [some address].” And as a separate fact, “Person X has declared that their email is [email of [some address]]”#2018-07-1912:24henrikAlthough, perhaps this is chopping up the conceptual world too finely.#2018-07-1912:41chrisblomIt is an option#2018-07-1912:41chrisblom{:db/id 123
:internet/email "[email address]"}#2018-07-1912:44chrisblomThen later you could add:#2018-07-1912:44chrisblom{:db/id 789
:company/id "Acme Corp."
:company/email 123}
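(Editor's note: chrisblom’s two-step example above can also be done in a single transaction by using a string tempid in place of the literal :db/id 123. A sketch with the peer API; :company/email is assumed to be a :db.type/ref attribute, and the address is a made-up placeholder:)

```clojure
;; Sketch: assert the email entity and the company that references it
;; in one transaction. The string tempid "the-email" plays the role of
;; :db/id 123 in the example above; Datomic resolves both occurrences
;; to the same newly created entity id.
@(d/transact conn
   [{:db/id          "the-email"
     :internet/email "info@acme.example"}   ; hypothetical address
    {:company/id    "Acme Corp."
     :company/email "the-email"}])          ; ref to the email entity
```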
#2018-07-1912:44henrikIn the domain I’m looking at, email addresses may show up on people, organisations, book reviews, journal articles, etc. etc., and this may be a way to tie them together, given that both URLs and emails have UUID-like properties.
If the same email appears on two of these entities, there’s likely a relation, barring typing errors.#2018-07-1912:47chrisblomyes that seems reasonable to me#2018-07-1912:47henrikBut then I have to face the fact that there may eventually be overhead as well. Such as “[a mistyped address]”, which was then corrected to “[the corrected address]”. Now there’s a lonely email, unconnected to anything, floating around.#2018-07-1912:47henrikNow I’ve got to write a vacuum cleaner which goes around and retracts pointless emails.#2018-07-1912:48henrikOr a thing that checks if this was the last reference to the email. If so, retract it.#2018-07-1912:49henrikIt’s sounding a lot like garbage collection at the moment.#2018-07-1912:49chrisblomyes, but it seems doable#2018-07-1912:50chrisblomyou could use a transaction function to rename emails that retracts email entities once they are no longer used#2018-07-1912:51stuarthalloway@henrik I don't think I would bother removing such things without a tested performance requirement showing that it matters.#2018-07-1912:51henrikRight. So, let ’em float.#2018-07-1912:52stuarthallowayAnd you always have enough info to change your mind later, because Datomic.#2018-07-1912:52henrikTrue. I’ll give it a shot and see what happens. Thanks to both of you.#2018-07-1912:53stuarthallowayI am lazier than @U9CQNBXDX 🙂 -- if I did have such a batch cleanup job I would write it as a 5-line script, not a tx fn.#2018-07-1912:58chrisblomi forgot to mention that it would be a 4-line tx fn#2018-07-1914:53henrikStill not as lazy as doing nothing at all, so that wins 😄#2018-07-1912:22chrisblomi think it's better to model separate domain types as separate entities: a person can have some relation to a company, but a person is not a company#2018-07-1914:08eoliphantyeah, @henrik @chrisblom I’ve been back and forth on this as well.
For me at least part of the problem has been falling into being ‘unnecessarily relational’ when modeling. One technique I’ve come across that’s a little less common, but pretty powerful (kind of like Datomic lol) is Object-Role Modeling. It has some formalisms around verbalizing models and what have you. There are some ORM tools that do all this crazy transformation to map it into relational models. But you can do a pretty much 1-to-1 mapping of what you come up with in datomic#2018-07-2001:39chrisbroomeIs there any automated way to get datomic pro starter running locally on a laptop? I haven't found any way to use it that doesn't require manually editing configuration files.#2018-07-2004:34eoliphantnot that I know of @chrisbroome, but if you’re just running in dev mode, I think it’s just copying that sample template up and pasting in your license key#2018-07-2008:36dominicm@chrisbroome we usually add a dependency on it locally, and then use the in-memory mode. That is, we don't use an external transactor.#2018-07-2013:11Petrus TheronIs it possible to run Datomic client API on Heroku with Postgres as a backing store with an eager indexing scheme/transactor without running a Datomic Peer?
Datomic On-Prem requires Heroku Enterprise to run, and my hobby side-project doesn't justify the cost of migrating to Datomic Cloud yet.
*Edit: I see there is now a $1/day solo deployment. Maybe that's what I need. Does that work with Datomic Ions?#2018-07-2013:19Petrus TheronHm, I don't understand the AWS Solo deployment pricing. When I continue to the AWS Marketplace subscription page, I see that I will also be billed hourly for t2.small and i3.large instances, not just $1/day. Will these costs be discounted for solo deployment, or am I doing something wrong?#2018-07-2016:31marshallThe all-in price for solo is around $1 a day, depending a bit on your free tier usage, etc#2018-07-2016:31marshallYes, ions absolutely work with solo#2018-07-2020:35henrikI’ve been running the solo since the 25th last month. I’m up to a grand total of about $20 for this month so far, which is about a buck a day.
I wouldn’t necessarily call it easy to set up Datomic Cloud (or indeed do anything else on AWS), but it sure is a lot easier than the alternative.#2018-07-2013:25Petrus Theron^ nevermind, I figured out the AWS Marketplace UI is just confusing - it quotes you for all possible components. At the next step, you can specify which Cloudformation to use (Solo or Production).#2018-07-2016:31marshallYes, we are working with AWS to try and make this clearer, but it’s currently the way Marketplace’s UI works#2018-07-2104:10henrikOne of these days, Amazon will decide that they’ve finally amassed enough cash to hire a designer.#2018-07-2015:22donaldballI understand that string tempids are encouraged when building txns these days. When there isn’t a reasonable synthetic id available, I’ve been using (str (d/squuid)). Is that a bad idea?#2018-07-2016:33marshallDon’t think it’s an issue, although it may be a bit heavyweight for what you actually need
You only need as many unique strings as you have tempids in a single transaction - I usually default to the “stringified” version of any unique/identity attr I have#2018-07-2020:33kennyIn order to push an Ion, it has to be a Git repository?#2018-07-2021:03Oliver GeorgeNo. The are benefits though. If not you need to manually name your release. #2018-07-2021:15kennyI tried pushing a non-git Ion and received:
{:command-failed
"{:op :push :uname \"kenny\" :creds-profile \"compute-dev\"}",
:causes
({:message "Shell command failed",
:class ExceptionInfo,
:data
{:args ("git" "status" "-s"),
:result
{:exit 128,
:out "",
:err
"fatal: Not a git repository (or any of the parent directories): .git\n"}}})}
#2018-07-2022:19jaret@U083D6HK9 @U055DUUFS yes, it requires git. It uses git to make the package to push. We’d be interested in hearing feedback if your business or project prohibits the use of git.#2018-07-2022:40Oliver GeorgeI stand corrected#2018-07-2103:56kenny@U1QJACBUM No use case for it. Just surprised me because the docs don't mention it as a requirement.#2018-07-2021:22kennyI am getting a bunch of DEBUG output from AWS and apache when running datomic.ion.dev commands. Is there a way to configure this? I use com.taoensso/timbre with com.fzakaria/slf4j-timbre and configure Timbre in my code. My code is not getting called when running the Ion commands so the log config is not set.#2018-07-2211:48henrik(d/q {:query '{:find [?id ?title (pull ?id [:journal/id])]
:where [[?id :journal/title ?title]]}
:args [(d/db conn)]})
Gives me the following error:
ExceptionInfo processing rule: (q__1114 ?id ?title ?id), message: processing clause: [?id :journal/title ?title], message: java.lang.ArrayIndexOutOfBoundsException: 2 clojure.core/ex-info (core.clj:4739)
#2018-07-2312:14marshallYou can only have each entity once in a find expression. In your original example, you have ?id and the pull on ?id. You could pull [:journal/title :db/id] if you want to pull both.#2018-07-2312:23henrikAh, yes, I can see ?id appearing twice there. java.lang.ArrayIndexOutOfBoundsException threw me off. Pull looks like a function, so intuition suggests that ?id would be consumed by it and of no concern for the surrounding bits. There’s clearly some magic going on here.#2018-07-2312:23henrikThank you!#2018-07-2312:25marshallNo problem#2018-07-2211:49henrikDropping the initial ?id in the :find clause works fine though:
(d/q {:query '{:find [?title (pull ?id [:journal/id])]
:where [[?id :journal/title ?title]]}
:args [(d/db conn)]}
[["International Bulletin of Mission Research"
{:id [{:identity/type "publisher-id",
:identity/value "IBM",
:db/id 22918220369363020}
{:identity/type "hwp",
:identity/value "spibm",
:db/id 22918220369363024}]}]]#2018-07-2211:50henrikWhy?#2018-07-2307:00Oliver GeorgeThis is the result of an experiment to automate the API Gateway setup required for Web Service Ions via the aws cli (so each new deployment isn't a manual setup task). I'm interested in any feedback (approach, assumptions, implementation...)
https://gist.github.com/olivergeorge/cc0ca9a945cb372d35d97e45573656ee
(updated to tidy up)#2018-07-2312:35steveb8nI did something similar here https://github.com/hlprmnky/ion-appsync-example/blob/master/src-pages/cf/node_pages.clj although not for Ions, instead for CLJS lambdas intended to eventually be the host pages for an Ion backed SPA#2018-07-2312:36steveb8nUsing Crucible and Cloudformation is a pretty nice experience for doing infra as code IMO. I like where all these ideas are taking us#2018-07-2312:54Oliver GeorgeThanks I'll check it out. Crucible is new to me. Still getting familiar with the AWS landscape.#2018-07-2314:59eoliphantGood deal, I’ve been looking at this myself.#2018-07-2315:01eoliphantWe’re more of a terraform shop, but similar idea#2018-07-2316:50cjsauercondensation is also a fun option for writing infra as code in Clojure. I've had good success working this library into certain deployment workflows.
https://github.com/comoyo/condensation#2018-07-2401:34Oliver GeorgeNow that I look at the cloudformation documentation it does seem like generating a cloudformation template is ultimately a simpler solution.#2018-07-2311:19billyrIs excision CPU bound? Does the way I partition transactions matter? I'm excising 150k entities and wondering how long it'll take#2018-07-2312:16marshallThat’s a HUGE excision. Excision is not intended for size control and should be used only when necessary (i.e. legally required to remove something). As a rule of thumb, if you can’t type out the excision transaction manually it’s probably too big to run at once.#2018-07-2311:32val_waeselynck@bill_rubin I think it's not so much about size, more a matter of how much it disrupts your indexes.#2018-07-2311:35val_waeselynckhow long it takes probably depends a lot on your Transactor's hardware, your storage and your network performance characteristics#2018-07-2311:36henrikDoes anyone know where I can find the options available for the :headers field for a web ion?#2018-07-2311:40billyr@val_waeselynck Thanks. It's on a single host and the transactor has been maxing out the cpu overnight so I'm assuming that's the limiting factor. 
I guess I'll just recreate the db#2018-07-2311:41val_waeselynck@bill_rubin you may want to have a look at this: https://gist.github.com/vvvvalvalval/6e1888995fe1a90722818eefae49beaf#2018-07-2311:42billyr@val_waeselynck Yea that's what I'm doing haha, thanks!#2018-07-2313:28henrikFor a web ion, how can I capture the path, like the /dev/hello/world part of ?#2018-07-2313:30marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#web-code#2018-07-2313:31marshalli believe it is in the :uri key#2018-07-2313:31marshallyou’d need to parse it yourself#2018-07-2313:36marshall@henrik ^#2018-07-2313:36henrikExcellent, thank you @marshall#2018-07-2315:05eoliphantAlso @henrik, since what’s coming in is basically ring-compatible, you can drop right into one of the various and sundry routing/middleware libs if your use case is non-trivial. I just swapped out some custom code for reitit literally last night#2018-07-2316:40luchiniI’m trying a simple retraction on Datomic Cloud and getting a weird error. The retraction is: [[:db/retract :person/email :db/unique :db.unique/identity]] (yes, the dataset I’m working on does not guarantee unique emails for some bizarre reason).#2018-07-2316:41marshall@luchini that’s a known issue. we’re working on a fix#2018-07-2316:41luchiniDatomic gives me a nth not supported on this type: Db anomaly#2018-07-2316:42luchiniGreat @marshall! Thanks a lot. Do you know if it would work in the very same transaction where I’m in fact asserting duplicate emails, or do I need to keep two separate transactions?#2018-07-2316:42marshallyou’d need to retract the :db/unique first#2018-07-2316:44luchiniWhat about the opposite scenario? (trying to prepare for the future). 
When I manage to implement a transaction that fixes the dataset in the live system, I’ll need to make sure that I’m updating all duplicate emails in the same transaction I’m adding the :db/unique back in.#2018-07-2316:45luchiniIs that possible?#2018-07-2316:45marshallhttps://docs.datomic.com/cloud/schema/schema-change.html#sec-5#2018-07-2316:46marshall“If there are values present for that attribute, they must be unique in the set of current database assertions.”#2018-07-2316:46marshallso you’d have to update things to make them unique then issue the schema change transaction#2018-07-2316:47luchiniThank you @marshall.#2018-07-2317:54eoliphanthey to the datomic folks, I dropped in a couple feature requests. I’d talked to stu about supporting deploying into existing vpc’s, based on that discussion I know part of the desire is to keep things as contained as possible from a support perspective. But, I think this will be pretty important in the context of datomic cloud’s inevitable massive growth 🙂 Say in our case, we’ve gone from a kind of cheesy, ad-hoc couple vpc’s across two accounts (prod v non-prod) to a 1-1 account/vpc approach for dev,test,etc complemented by shared vpcs for management, ingress etc. with IPSec vpn’s wiring them together. It’s kind of hard to shoehorn extra dedicated datomic VPC’s, per env into this.
ions can complicate this further. Since the code is effectively global, I’d like to give each of my devs their own solo system, then have a common ‘dev’ system that’s updated via CI or something. It’d potentially be more manageable to support multiple systems/vpc.
Just a few thoughts 😉
In the meantime, ions, are fricking amazing 😉#2018-07-2320:33jjfineanyone have any tips on how to write tests for queries that use the :db/txInstant field? i'm having trouble fixturing test data without doing a (Thread/sleep ..) between calls to transact#2018-07-2513:53matthavenerYou can pass an instant to your transaction to arbitrarily set the time of the transaction. All that datomic cares about is that the txInstants are monotonic#2018-07-2320:48kennyWhen testing my Ion endpoint via the Method Test UI in the AWS Console, my response body is base64 encoded. Is there a way to get the UI to display the decoded version?#2018-07-2322:11shaunxcodeis :db/index not supported in datomic cloud? when I try to do (dc/pull db '[*] :db/index) I get #:db{:id nil}#2018-07-2322:51steveb8n@shaunxcode yes, :db/index is not supported https://docs.datomic.com/cloud/schema/schema-reference.html#2018-07-2406:10steveb8nI’m deploying the Specter lib with my Ions. At load time I am seeing a stackoverflow in the logs. I’ll paste it below. This code loads/runs fine on my laptop using the same deps although I do see a deps override warning during push. What is the best way to debug something like this? I’m using Clojure 1.9.0 on my laptop and I presume the same on Ions/Cloud.#2018-07-2414:14stuarthallowayhi @U0510KXTU The Solo template is economized at every level, including having a smaller stack max. I have seen this happen with deep compilation, and (sigh) it can be nondeterministic. AOTing the problem library may help. The problem will definitely go away on Prod.#2018-07-2422:30steveb8nthanks @U072WS7PE that’s good to know. in this case it seems that a classpath issue (still undiscovered) was the real issue, which was then masked by the stackoverflow. maybe there’s greater memory consumption when classpath exceptions occur?#2018-07-2422:32steveb8neither way, some docs on this would be good for others since lots of folks will try ever more libs on Ions/Solo over time.
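matthavener’s tip about pinning transaction time can be sketched as transaction data (a hypothetical sketch; `:person/name` is an illustrative attribute, and `"datomic.tx"` is the reserved string tempid for the current transaction in recent Datomic APIs):

```clojure
;; Sketch: assert :db/txInstant on the transaction entity itself so fixture
;; transactions get deterministic timestamps. The supplied instant must be
;; later than the previous transaction's :db/txInstant (times stay monotonic).
(def fixture-tx
  [{:db/id        "datomic.tx"
    :db/txInstant #inst "2018-01-01T00:00:00.000-00:00"}
   {:person/name "Ada"}]) ; illustrative domain datom

;; usage (peer API): @(d/transact conn fixture-tx)
```

This avoids the `(Thread/sleep ..)` workaround entirely, since the test controls the clock.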
I’m fully sorted now, just got my api working so stoked!#2018-07-2406:10steveb8n{
"Msg": ":datomic.cluster-node/-main failed: java.lang.StackOverflowError, compiling:(com/rpl/specter/util_macros.clj:61:29)",
"Ex": {
"Cause": null,
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "java.lang.StackOverflowError, compiling:(com/rpl/specter/util_macros.clj:61:29)",
"At": [
"clojure.lang.Compiler",
"analyzeSeq",
"Compiler.java",
7010
]
},
{
"Type": "java.lang.StackOverflowError",
"Message": null,
"At": [
"clojure.lang.Util",
"equiv",
"Util.java",
33
]
}
],
#2018-07-2406:12steveb8nhere’s the deps I’m using
org.clojure/clojure {:mvn/version "1.9.0"}
com.datomic/client-cloud {:mvn/version "0.8.56"}
com.datomic/ion {:mvn/version "0.9.16"}
org.clojure/data.json {:mvn/version "0.2.6"}
com.rpl/specter {:mvn/version "1.1.1"}
com.stuartsierra/component {:mvn/version "0.3.2"}
com.taoensso/timbre {:mvn/version "4.10.0"}
#2018-07-2407:26steveb8nstrange. I just fixed it but not sure how. I changed some of the dependencies from the push warning. I’ll follow up with more info if I can clarify#2018-07-2407:47henrikI’m now rendering a webpage through an Ion, which is awesome. Http-kit works great for developing the page locally.
- A couple of questions on top of this: how can I set up API Gateway to allow rendering of /?
- What’s the recommended way of serving static content? Should I set a custom domain, create S3 buckets for images, js and css? Or do I serve those directly from the Ion?#2018-07-2409:18henrikI’ve attached a domain to API Gateway. But I’m getting "Missing Authentication Token" for . works fine of course.#2018-07-2409:33henrikAlright, I seem to have figured this one out: create proxy method directly on the root / in API Gateway.#2018-07-2410:38souenzzoCheckout cloudfront.
My app has a /html/render/* that generates the index.html
Static images and js go to s3
Cloudfront does this redirect: / ->> api/html/render, /static ->> s3
You can also add other rules #2018-07-2411:04henrikOh, right! I just set up a custom domain directly in API Gateway.
I’ll dismantle that and figure out Cloudfront instead. Thanks!#2018-07-2415:03henrik@U2J4FRT2T I’m having trouble figuring out how to redirect to API Gateway, while allowing to be redirected to s3.
The sources I’m reading are all saying that this can only be done with subdomains.
How do you go about routing / and /static respectively?#2018-07-2415:35henrikThose sources were apparently fallacious! I think I got it.#2018-07-2415:38souenzzoCreate a distribution (on create, you need to assign it to your loadbalancer/apigateway)
in this distribution, create another Origin, assign your S3 bucket.
then create some Behaviors to redirect to each origin.
be careful with caching. "Cache-Control" "max-age=xxx" is your friend. API calls through cloudfront may not be a good idea. (Unless you REALLY want the caching thing)
You cannot do complex regexp on Behaviors.#2018-07-2416:51henrik@U2J4FRT2T Do you have to do anything special with Route 53 when associating the domain name? The domain redirects to the API Gateway <gunk>. rather than hiding it.#2018-07-2416:52souenzzojust alias on r53
it will probably offer you this endpoint as an option for the alias#2018-07-2416:53henrikRight. But it does a redirect, so the raw API Gateway URL ends up exposed to the user.#2018-07-2417:39henrikThe path pattern for s3, should it be for example /static/*?#2018-07-2418:07souenzzoYep. that simple patterns are ok.
But at first, I tried to write "anything that ends with 'dot' + 2 or 3 alpha letters" but that regexp engine doesn't accept that kind of pattern#2018-07-2506:16henrikI could not for the life of me get Cloudfront to alias instead of redirect, so I ripped it apart and set up S3 to be accessed through the API Gateway.
Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/fault,
:datomic.client/http-result {:status nil, :headers nil, :body nil}}
The peer server log has following warning:
2018-07-24 10:32:33.112 WARN default datomic.cast2slf4j - {:msg "Could not marshal response", :type :alert, :tid 12, :timestamp 1532428353111, :pid 1560}
java.lang.RuntimeException: java.lang.Exception: Not supported: class clojure.lang.Delay
at com.cognitect.transit.impl.WriterFactory$2.write(WriterFactory.java:150) ~[transit-java-0.8.311.jar:na]
at cognitect.transit$write.invokeStatic(transit.clj:149) ~[datomic-transactor-pro-0.9.5661.jar:na]
at cognitect.transit$write.invoke(transit.clj:146) ~[datomic-transactor-pro-0.9.5661.jar:na]
at cognitect.nano_impl.marshaling$transit_encode.invokeStatic(marshaling.clj:59) ~[datomic-transactor-pro-0.9.5661.jar:na]
...
#2018-07-2410:35staskis tx-range not supported in datomic client with peer server?#2018-07-2415:00rhansenHmm... What does this error message mean? tempid used only as value in transaction#2018-07-2415:01rhansenDoes it mean that I have a tempid somewhere which isn't used in as a value for db/id?#2018-07-2415:02rhansenAlso, is it possible to figure out which tempid it is refering to? I have a pretty big transaction 😕#2018-07-2415:02donaldballI believe that means you’ve asserted an entity that has no attributes.#2018-07-2415:05donaldballProbably you could filter the txn for a map that only has a :db/id key.#2018-07-2415:05rhansenhmm, ok#2018-07-2420:30rhansenThe problem was a typo somewhere in my code. 😛#2018-07-2420:31rhansenWould've been much easier to find if the error message included which tempid caused problems though 🤔#2018-07-2500:30Oliver GeorgeAWS CloudFormation newbie question. I'm experimenting with setting up a apigateway via a cloudstack template. There's one magic number... the CodeDeployDeploymentGroup. I think I could use Fn::ImportValue to read this from the datomic cloud cloudstack if it included an Export for the associated Output.
"CodeDeployDeploymentGroup": {
"Description": "CodeDeploy Deployment Group",
"Value": {
"Fn::GetAtt": [
"Compute",
"Outputs.CodeDeployDeploymentGroup"
]
}
},
Could become
"CodeDeployDeploymentGroup": {
"Description": "CodeDeploy Deployment Group",
"Value": {
"Fn::GetAtt": [
"Compute",
"Outputs.CodeDeployDeploymentGroup"
]
},
"Export": {
"Name": {
"Fn::Sub": "${SystemName}-CodeDeployDeploymentGroup"
}
}
},
(or similar, from that template i think you'd use "Ref": "AWS::StackName" as the system name component)
The alternative seems to be modifying the root stack to reference my app specific apigateway stack. Not sure if that's normal or recommended and how that might interplay with datomic cloud updates.
Question really is: am I missing something?#2018-07-2502:30steveb8nNot answering your question but you might consider using Crucible instead to generate your templates. Even just having functions available makes it a lot easier. Here’s an example https://github.com/hlprmnky/ion-appsync-example/blob/master/src-pages/cf/node_pages.clj#2018-07-2502:36Oliver GeorgeThanks @U0510KXTU I thought I'd aim for zero helpers/libs/tooling first to get familiar with what's underlying things. Presume it's something I'd outgrow. Look forward to checking out your code and understanding how it's helpful.#2018-07-2502:39steveb8nthat makes sense. there are examples of refs in there, in Crucible it’s the xref fn#2018-07-2502:40steveb8nand they can be parameters from the command line or CF “env” values e.g. region#2018-07-2502:40steveb8nmaybe looking at how those CF value/fns are built will get you closer to how to infer/import the group you are trying to access#2018-07-2502:42Oliver GeorgeHere's the simple template I came up with. Effectively what would be generated from following the ion-tutorial (but not using {proxy+} so slightly simpler)
https://gist.github.com/olivergeorge/c3918c52b89278a9c1807c9d47a9860e#2018-07-2502:43Oliver GeorgeI used a Parameter since the datomic stack doesn't export the compute group name... if they did the ImportValue thing should do the trick .#2018-07-2502:46Oliver George@U0510KXTU that json feels very similar to your code doesn't it.#2018-07-2502:47steveb8nyep. the only downside I’ve noticed is refs are string joins which are a bit more complex#2018-07-2502:52Oliver GeorgeIn your experience, what approach would you use for setting up an apigateway to complement a datomic cloud app (with ions)? I'm largely guessing but the options seem like:
(1) a "nested stack" approach provides access to the compute group name and connects the datomic stack lifecycle events to the apigateway stack.
(2) treat as stand alone cloudformation and refer to the compute stack by the known group name
(3) other.. (aka I need to learn more about cloudformations and devops practices on AWS)#2018-07-2502:59steveb8nI’ve already done this 🙂 I used Crucible to generate the APIGW and passed in the name of the compute stack as a parameter so that my fns can generate the AWS ARNS using string joins#2018-07-2503:01Oliver GeorgeGotcha thanks (and cool!)#2018-07-2504:13eoliphantHi, I’m getting a weird error when I try to retract a unique constraint on an attribute
(d/transact conn {:tx-data [[:db/retract :otu-seq/otuid :db/unique :db.unique/identity]]})
ExceptionInfo nth not supported on this type: Db clojure.core/ex-info (core.clj:4739)
#2018-07-2504:24eoliphantnvm just saw the other comments about it#2018-07-2510:10henrikI just added ring as a dependency to my Datomic/Ions project, and now I get this:
Refresh Warning: Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Refresh Warning: Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
Removing ring from deps.edn removes the error.
Has anyone else seen this?#2018-07-2512:54rhansenyes#2018-07-2512:54rhansenIt's just a reflection warning though. No biggie#2018-07-2513:30ninjaHi, rather short question:
is it possible to transact multiple values for a :db.cardinality/many attribute using the list form?
Something along those lines:
[[:db/add "my-ident" :foo/bar-refs [ref-ident-1 ref-ident-2]]]
#2018-07-2513:54eraserhdI think you can't do that here, but you can in map form. If you think about it, this is ambiguous. ref-ident-1 could be a keyword and ref-ident-2 could be a value, making the inner vector an entity reference.#2018-07-2513:58marshalltry [[:db/add "my-ident" :foo/bar-refs [[ref-ident-1 ref-ident-2]]]] @atwrdik#2018-07-2514:03ninja@marshall following this example i got an invalid list form error (the same happens using my example above)#2018-07-2514:04marshallerm. right; the list form is one datom per vector I believe#2018-07-2514:05marshallyou can transact multiple vals in map form#2018-07-2514:05marshallor you can use multiple individual vectors in list form#2018-07-2514:05ninjaThe explanation from @eraserhd makes sense to me. But I'm still curious how to add multiple refs without using the map form. Would one just write something like this:
[[:db/add "my-ident" :foo/bar-refs ref-ident-1]
[:db/add "my-ident" :foo/bar-refs ref-ident-2]]
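For reference alongside the two list-form assertions above, the map-form equivalent passes the values as a collection (a sketch; the entity tempid and ref idents are placeholders from the thread, not a real schema):

```clojure
;; Map form: a :db.cardinality/many attribute accepts a collection of
;; values in a single entity map. "my-ident" is a tempid string; the
;; keywords stand in for idents of existing ref targets.
(def tx-data
  [{:db/id        "my-ident"
    :foo/bar-refs [:ref-ident-1 :ref-ident-2]}])
```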
#2018-07-2514:06marshallyep#2018-07-2514:06ninjagreat, thx guys#2018-07-2516:27curtosisare Tim Ewald’s code examples for his reified transactions talk from DoD 2015 still available anywhere? The gist has understandably evaporated.#2018-07-2610:19octahedrionI really wish I'd added a :db/unique :db.unique/identity to an attribute but it's too late as there are multiple values in the current set of database assertions -- I tried retracting all but one of those assertions but to no avail, is there anything I can do ?#2018-07-2610:26chrisblomhave you seen https://docs.datomic.com/cloud/schema/schema-change.html#sec-5?#2018-07-2610:27chrisblomdoes you attribute use :db.cardinality/one?#2018-07-2610:28octahedrionyes, but the 2nd condition in the green box is not met#2018-07-2610:28chrisblomis there any reason you cannot remove the duplicate values?#2018-07-2610:28octahedrionas I said - I tried retracting them#2018-07-2610:30chrisblomand the values are unique afterwards?#2018-07-2610:33steveb8nHas anyone setup CI to push/deploy Ions yet? If so, anything to watch out for? 
How do you do auth for the CLI in the CI env?#2018-07-2611:55octahedrionok - I think I've found a way: I renamed the offending attribute :old-attribute-name and asserted the attribute again with the unique constraint, which works, thereafter one only has the small inconvenience of having to specify the attribute in one's queries (to prevent assertions for the old one appearing)#2018-07-2611:56octahedrionand naturally I have to assert the latest values of the old attribute on the new one#2018-07-2611:56octahedrionbut that's ok#2018-07-2617:57curtosisreally dumb question, but I’m drawing a blank today: how do you programmatically build a query that takes a UUID string as parameter?#2018-07-2617:59octahedrion(d/q '{:find [?n] :in [$ ?uuid] :where [[?n :uuid ?uuid]]} (d/db conn) uuid)#2018-07-2618:00octahedrion- programmatically manipulate the map as you wish#2018-07-2618:02curtosislooks like what I’m trying, but that doesn’t work. I can run it in the console with [?e :org/id #uuid "string"] , but without the reader tag in my query it won’t match.#2018-07-2618:03octahedriontry (UUID/fromString uuid-string)#2018-07-2618:07curtosisI think that’s what I’m looking for, but somehow that’s not working.#2018-07-2618:08curtosis(d/q '[:find ?org .
:in $ ?orgId
:where [?org :organization/id (UUID/fromString ?orgId)]]
db orgId )#2018-07-2618:09octahedriondo the UUID/fromString outside the query#2018-07-2618:10octahedrionoutside the :where clause I mean#2018-07-2618:10octahedriondb (UUID/fromString orgId)#2018-07-2618:10octahedrionor pass in a UUID not a string#2018-07-2618:10curtosisright. That works! Thanks!#2018-07-2618:11octahedrionbetter to pass in UUIDs#2018-07-2618:12curtosisunfortunately coming in from graphql /js so it’ll be a string, but easily managed.#2018-07-2618:12octahedrionconvert elsewhere before using in query#2018-07-2618:13octahedrioncleaner#2018-07-2618:14Peter Wilkinsstillsuit has a custom scalar for that https://github.com/workframers/stillsuit/blob/51064573edab7a3f03f54f23c632aeb87f243fa4/resources/stillsuit/base-schema.edn#L40#2018-07-2618:17curtosishmmm… wonder why stillsuit isn’t picking it up right then#2018-07-2618:18Peter Wilkinsshould probably move to graphql channel?#2018-07-2618:19curtosisyup#2018-07-2618:21Peter WilkinsI’m having trouble getting a postgres backend setup. the jdbc uri looks ok but when I try to backup from s3 computer says no
bin/datomic -Xmx1g -Xms1g restore-db 'jdbc:'
java.lang.IllegalArgumentException: :db.error/invalid-db-uri Invalid database URI jdbc:
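As the replies just below point out, the URI in the failing command is missing the datomic:sql:// wrapper around the JDBC URL. A correctly formed restore, sketched with placeholder bucket, database name, and JDBC settings, looks like:

```shell
# All names and credentials here are placeholders; the JDBC query-string
# portion may need to be URL-encoded.
bin/datomic -Xmx1g -Xms1g restore-db \
  s3://my-backup-bucket/my-db \
  'datomic:sql://my-db?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic'
```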
#2018-07-2618:35curtosis datomic:sql://{db-name}?{jdbc-url}#2018-07-2618:36curtosisand IIRC the jdbc-url has to be URL-encoded#2018-07-2618:42Peter Wilkins:+1: solved it - was missing the datomic:sql://? before jdbc…#2018-07-2619:06Peter Wilkinsargg. I managed to restore the database under the name '' (empty string) and I can’t restore it again. Struggling to delete or rename it. :restore/collision The database already exists under the name ''#2018-07-2619:07marshallyou can delete the postgres table and recreate it#2018-07-2700:10rhansenAfter following the tutorial I just get connection refused when trying out api gateway with a ring handler 😕#2018-07-2700:10rhansenanyone experience that?#2018-07-2703:15kennyIs there a way in code to tell if an Ion is deployed or not? Curious how others are handling configuration in dev vs prod.#2018-07-2703:34steveb8n@kenny you could try invoking it using AWS CLI. that would verify it’s deployed. but it doesn’t seem like this is really your question. can you elaborate?#2018-07-2703:35steveb8nI’m curious because I’ll be setting up the same environments in the coming weeks#2018-07-2703:36kennyMy application configuration depends on the environment it is running in (dev/qa/prod). I don’t see anyway to parameterize the deployment like that. #2018-07-2712:41jaretWe have a release in the works to deliver params for deployment. We’re currently working with AWS to get it out. I don’t have a timeline, but we’re going to be delivering parameterized deployment.#2018-07-2703:45steveb8nI’m wondering about that also. hence my earlier question about CI setup. 
If you look at the Ion lambdas there are env variables there so that seems like a good way to do this but not sure how to populate those from code.#2018-07-2704:37steveb8nit seems like the AWS Params store is part of the answer for this https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html#2018-07-2713:39stuarthalloway@U0510KXTU stay tuned 🙂#2018-10-0314:37jaretHey @U380J7PAQ could we move this conversation to a ticket? I’d also like to see if we could get read-only access to your Cloudwatch logs to look closer at your inability to connect. That’s better done over a ticket.#2018-10-0314:37jaretIf you can give me a good e-mail address, I can start a case and copy this conversation into a ticket for us.#2018-10-0315:01eoliphanthi @U1QJACBUM <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> is good#2018-10-0315:02eoliphantThe stack is being upgraded to the latest, so e#2018-10-0315:02eoliphantwe'll try again after that#2018-10-0315:10jaretGreat! I created a case and sent you an e-mail with instructions for a ReadOnly Cloudwatch account in the event you’re able to provide us access and you still can’t connect on the latest stack.#2018-10-0120:57marshallcan you define “good chunk”#2018-10-0120:58marshalland what does your dashboard show?#2018-10-0210:08jarppeIs there a way to use client api and have dynamically created in-memory databases for testing?#2018-10-0210:10jarppeIn testing it has been really nice to have an in-memory database created in test fixture, but I guess that is not possible if we use the client api#2018-10-0210:39steveb8n@jarppe yes! if you want to roll your own you can start with https://gist.github.com/stevebuik/9b219090a2d10cc4fb06d62ee928ca7e#2018-10-0210:40steveb8nor for a more refined solution https://github.com/ComputeSoftware/datomic-client-memdb#2018-10-0210:40steveb8nI rolled my own because I wanted interceptors in that layer as well. 
I have not tried the OSS lib#2018-10-0210:41Hadii have a query like that and im using datomic free. i want to display tx time with given entity id. but i currently only have tx id as the result. is it possible to get the time values? (FYI datomic pro can use :db/txInstant to make the time )#2018-10-0213:28favilaI'm not aware of any limitation in datomic free except for peer counts. It should just work. Try it.#2018-10-0213:29favilaYour query could be rewritten as#2018-10-0213:30favila(d/q
'[:find ?e ?attrname ?v ?txinst ?added
:in $ ?e
:where
[?e ?a ?v ?tx ?added]
[?a :db/ident ?attrname]
[?tx :db/txInstant ?txinst]]
(d/history (db))
eid)#2018-10-0304:11Hadithankyou. after i re-run the repl it solves the :db/txInstant. maybe its a bug#2018-10-0211:40jarppe@steveb8n Great! Precisely what I'm looking for, thanks.#2018-10-0215:47kennyI wrote that lib because we needed it for probably similar reasons that you need it. LMK if you have any questions.#2018-10-0213:00steveb8nQuestion: looking at this lib I learned about all the different kinds of uuids. In my ion code I'm simply using java.util.UUID but I'm wondering if there's any value in using other uuid types? Any uuid experts out there?#2018-10-0213:32favilaName-based uuid (version 5) has some nice properties#2018-10-0213:32favilajava.util.UUID can represent all uuid versions, though, it's not a matter of needing a different type#2018-10-0214:20steveb8nCool. It's this lib https://github.com/danlentz/clj-uuid/blob/master/README.md#2018-10-0215:14jarppeI agree with Francis, v5 UUIDs are great. I use them when I need to map something like user ID to UUID so that the same ID always maps to same UUID.#2018-10-0216:28steveb8nThat is a good tip. I can imagine some use cases e.g. saving a lookup by name for entities from a db when the name is immutable. Are there other less obvious scenarios where it's handy?#2018-10-0217:43jarppeThis is not relevant to datomic, but I used it when I generated a test fixture to MongoDB, basically exactly what temp-id's are in Datomic.#2018-10-0218:33favilacan also use v5 uuids for key-by-value situations#2018-10-0218:33favilacompound indexes, hashes, etc#2018-10-0218:48steveb8nI had not thought of the compound index. I'll try that in datomic. Thanks!#2018-10-0218:50favila(it won't be a sorted index)#2018-10-0307:03steveb8nGood point.
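The name-based UUID idea in this exchange can be sketched with the JDK alone: the same input always yields the same UUID. java.util.UUID ships only the v3 (MD5) variant; for the v5 (SHA-1) variant favila mentions, the clj-uuid library linked above is one option. `name-uuid` is an illustrative helper, not from the original.

```clojure
;; Deterministic name-based UUIDs: identical names map to identical UUIDs,
;; which is what makes them useful for stable external-id -> UUID mappings.
(import 'java.util.UUID)

(defn name-uuid
  "Version-3 (MD5) name-based UUID for a string."
  [^String s]
  (UUID/nameUUIDFromBytes (.getBytes s "UTF-8")))

;; (= (name-uuid "user-42") (name-uuid "user-42")) is always true,
;; across processes and across runs.
```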
I'll stick to my txn fns for that#2018-10-0216:13donaldballHey, when applying a txn to a dev transactor earlier, my coworker hit this error:
WARN [2018-10-02 11:59:52,036] clojure-agent-send-off-pool-8 - datomic.connector {:message "error executing future", :pid 72084, :tid 243}
org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ119017: Consumer is closed
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.checkClosed(ClientConsumerImpl.java:962)
Google reveals scant. Any ideas what’s up?#2018-10-0217:44jarppeWhat about transaction functions with client api? What's the client api counter part for datomic.api/function?#2018-10-0217:59kennyhttps://docs.datomic.com/cloud/transactions/transaction-functions.html#custom#2018-10-0218:18jarppeThat's for cloud and says that only the built in functions and classpath functions are supported.#2018-10-0218:19jarppeDoes this mean that the cloud api does not support functions like peer api with datomic.api/function does?#2018-10-0218:19jarppeThe documentation is.... vague#2018-10-0219:16rgorrepatiHi, I ran into the same issue as @donaldball.. Deja vu there, almost happened at the same time, and we are not co-workers 😉#2018-10-0219:16rgorrepati[clojure-agent-send-off-pool-803] WARN datomic.connector - {:message “error executing future”, :pid 43, :tid 87854}
org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ119017: Consumer is closed
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.checkClosed(ClientConsumerImpl.java:962)
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(ClientConsumerImpl.java:194)
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(ClientConsumerImpl.java:406)
at datomic.artemis_client$fn__1607.invokeStatic(artemis_client.clj:169)
at datomic.artemis_client$fn__1607.invoke(artemis_client.clj:162)
at datomic.queue$fn__1363$G__1356__1368.invoke(queue.clj:18)
at datomic.connector$create_hornet_notifier$fn__7866$fn__7867$fn__7870$fn__7871.invoke(connector.clj:195)
at datomic.connector$create_hornet_notifier$fn__7866$fn__7867$fn__7870.invoke(connector.clj:189)
at datomic.connector$create_hornet_notifier$fn__7866$fn__7867.invoke(connector.clj:187)
at clojure.core$binding_conveyor_fn$fn__4676.invoke(core.clj:1938)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)#2018-10-0219:17donaldballha ha nice#2018-10-0219:18donaldballIn our case, we’ve tentatively discovered that batching the forms of the txn into separate txns gets it to transact. Unfortunately, the original txn is only 77 forms, like 27k in size, not especially large, so it’s a little bit surprising.#2018-10-0219:26donaldballIt’s unsettling to note that downgrading from java10 to java8 fixes the problem#2018-10-0219:33donaldballSpecifically, downgrading the peer from java10 to java8 fixes the problem. Are there known issues with datomic peer and java10?#2018-10-0220:28rgorrepati@donaldball Do you mean to say you can reproduce it reliably?#2018-10-0220:29donaldballIt seems so, yes.#2018-10-0220:34rgorrepati@donaldball I was under the impression it is a connection issue between peer and transactor or peer and storage#2018-10-0221:00johnjon-prem has some very very old deps#2018-10-0221:01johnjusing anything past java 8 is asking for trouble#2018-10-0223:20kennyMy datomic storage stack pretty consistently fails to delete due to the Vpc not deleting. If I manually go into my VPCs and delete the datomic-created VPC, it works. This is pretty annoying. Is there a fix for this?#2018-10-0314:08stuarthallowayHi @U083D6HK9. We do not consider deleting a storage stack to be part of any regular workflow, so I am curious why you are doing this?#2018-10-0315:27kennyWe allow our developers to provision Datomic Cloud stacks when they need them. They then delete them when they no longer need them anymore. We end up with lots of stale VPCs and failed stack deletions. Also, because it is launched via a CloudFormation template, one would only expect for it to work with the regular CloudFormation operations, including Delete Stack.#2018-10-0316:26kenny@U072WS7PE a lot of those headaches would be mitigated if devs have a local instance to work against. 
That being said, the Datomic CFT deletion should work as expected.#2018-10-0316:28stuarthallowayI totally agree, and we test deletion in our regression suite. Can you send us more information about the error you are seeing?#2018-10-0316:28stuarthalloway@U083D6HK9 are developers recreating storage stacks against existing storage?#2018-10-0316:29stuarthallowayor have you handrolled something to deal with all of https://docs.datomic.com/cloud/operation/deleting.html#deleting-storage ? We left this manual on purpose to discourage people from deleting their data.#2018-10-0316:30kennyThe Events tab in the CF UI says:
> The vpc 'vpc-0931d229f45a061a1' has dependencies and cannot be deleted. (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: 8d2f7d93-f945-4002-b424-c47aba885b04)
and then DELETE_FAILED:
> The following resource(s) failed to delete: [Vpc].
New storage each time. We have a custom script to delete all those resources. I'm planning on adding it to a public gist as I'm sure others have this workflow as well.#2018-10-0316:30stuarthallowayWhy not leave the storage stack up all the time, and just provision compute when needed? That is certainly what we do internally.#2018-10-0316:31stuarthalloway@U083D6HK9 the whole architecture is designed so that you can leave storage up and just reconnect to it. Is there some benefit to doing this extra work that I am not seeing? If there is some isolation we fail to support I would like to make it first class.#2018-10-0316:31kennyBecause developers want to ensure they are working in a clean environment with empty DBs.#2018-10-0316:32kennyWe at first tried the approach of suffixing DBs with UUIDs, but that became a real pain.#2018-10-0316:33stuarthallowayOK, that is good input, thanks! Will discuss with the team.#2018-10-0316:37stuarthalloway@U083D6HK9 Do you take a similar approach with AWS resources, e.g. automating the creation of 1-off DDB, S3, etc. as needed?#2018-10-0316:38kennyYes. We use Pulumi (similar to Terraform) which has the concept of a stack. A stack consist of any number of resources. When created, a stack provisions all the resources with a unique name. This allows us to spin up entire instances of our infrastructure for any given environment: prod, dev, qa, kennys-prod-test, etc.#2018-10-0316:39stuarthallowayare the unique names Pulumi makes better than DB+UUID suffix in some way?#2018-10-0316:48kennyYes. It makes our dev workflow much easier. As an example, our application calls for DBs named accordingly: admin, customerdb-<cust UUID>1, customerdb-<cust UUID>2, ..., customerdb-<cust UUID>N. When working in a REPL, we know that all we need to do is connect to the admin DB, not admin-<UUID>. We ended up writing a wrapper for the connect function that auto-suffixed the DB name with the current UUID suffix. 
But then it became a problem when you wanted to run a development system (i.e. http server for UI dev) and run your tests using a clean DB all in the same REPL. This was the primary motivator for writing https://github.com/ComputeSoftware/datomic-client-memdb. The workflow we followed when working with the peer library was identical and it worked really well. We didn't need to think about what the current binding for the db-suffix was.#2018-10-0316:51kennyUltimately it boiled down to less code we need to maintain. Writing and testing code against the peer library was intuitive and easy.#2018-10-0316:52stuarthallowaythanks @U083D6HK9! This is very helpful input.#2018-10-0314:13luchiniSuper dumb question: on Datomic Cloud, do system names have to be globally unique? By “globally” I mean unique even across completely different AWS accounts.
Context: I have a stack failing to create and the only thing that seems to be a potential source of confusion is that I’m recycling a system name I had used in a different AWS account.#2018-10-0314:17eoliphantdo you know where it's failing? something like a collision on the S3 bucket name might be an issue#2018-10-0314:19stijn@luchini: we have datomic systems in different accounts with the exact same name, so I think the answer is no, but if you try to recreate one in the same account with the same name, it's a bit more work if you don't want to reuse the existing storage#2018-10-0314:58Joe Lane@luchini I’ve had stack creations fail at various times with brand new names as well.#2018-10-0315:22luchini@eoliphant this is what we got "The following resource(s) failed to create: [CodeDeployApplication, LoadBalancer, DatomicCodeBucket, HostedZone, BastionInstanceProfile]."#2018-10-0315:24luchiniThanks @stijn and @lanejo01… I’ll try a few more times. Thanks for disproving my theory 😄#2018-10-0315:50marshall@luchini there are some resources that don’t get destroyed when you delete a stack. you’ll want to look in the tag editor to search for anything from that system name to delete it explicitly#2018-10-0315:51marshallhttps://docs.datomic.com/cloud/operation/monitoring.html#tags#2018-10-0316:06luchiniThanks @marshall#2018-10-0317:22kennyWe automate the creation and deletion of Datomic Cloud stacks. As you probably know, deleting a Datomic stack does not delete all the resources that the stack created. You need to follow these steps to entirely delete the stack https://docs.datomic.com/cloud/operation/deleting.html#deleting-storage. We wrote this clj script to automate the deletion of all the resources a Datomic Cloud system creates and thought the community may find it useful. 
https://gist.github.com/kennyjwilli/55007211070a260044c8e6abcb54dd5b.#2018-10-0318:08stijnI think I also had to delete an IAM policy (datomic-admin-datomic-eu-central-1), which isn't mentioned in the docs @marshall#2018-10-0318:09okocimAre there any recommendations around modeling schemas for dealing with many-to-many relationships? I’m trying to match up information to a main entity from three different data sources, and each of the integrations has their own id for the main entity. However, there is some ambiguity in the matching such that one id from a given data provider can map to many ids from another provider. I’m trying to determine whether it’s better to model as refs with a cardinality of many on the main entity, or if I should create ‘linkage’ records that are effectively tuples of the 3 ids that might go together. At the end of the day, I have to pare down the results so that the main entity has exactly one reference to the data from each of the other integrations, by doing a ‘best match’ with some code logic.
All of that is admittedly a bit abstract, so I guess my question boils down to whether there are any recommendations for doing many-to-many relationships among more than two entities using refs or values.#2018-10-0518:20eoliphantwell the good thing about datomic is that you can pretty easily experiment, given the 'universal relation' there's frequently no one correct way.
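For okocim's question, the cardinality-many variant could be sketched as schema along these lines; every attribute name here is invented for illustration:

```clojure
;; Illustrative schema only: the main entity holds cardinality-many refs
;; to per-provider records, each carrying that provider's external id.
[{:db/ident       :main/provider-records
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident       :provider-record/source
  :db/valueType   :db.type/keyword      ; e.g. :provider-a, :provider-b
  :db/cardinality :db.cardinality/one}
 {:db/ident       :provider-record/external-id
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}]
```

The 'best match' paring-down step could then retract all but the winning ref per source.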
both of your ideas seem workable, the 'right' answer is probably going to be more dependent on the specifics of the resultant queries, etc. The many cardinality thing seems like a good idea to start, you could even perhaps model it in stages, where you 'promote' it once you've done your paring-down process#2018-10-0408:25staskhi, is there a limitation on the number of databases in a single datomic cloud system?#2018-10-0408:52steveb8nI'd also like to know this limit.#2018-10-0420:29stuarthalloway@U11FG9Z7Z and @U0510KXTU there is no fixed limit, but the thought (and testing) is around a fairly small number#2018-10-0420:30stuarthallowayi.e. database per customer would be a problem for most customer bases#2018-10-0420:30stuarthallowayWhere would the limit (if there was one) impact a design decision?#2018-10-0500:31steveb8nI considered 1 per customer but avoided it after thinking it through. For me this was just for interest's sake so no effects on my design#2018-10-0510:12staskI’m thinking about having a database per customer (each customer has multiple users, so i’m talking about up to a thousand databases per system).
It will simplify things like moving customer data between systems (for example when moving a customer from US region to EU region) or removing customer’s data from the system.#2018-10-0522:38steveb8nAgreed. 1 per customer makes org delete easier. I'm betting that there will be a better excision solution (in cloud) in future to address this. If delete could be done easily, would you still choose 1 db per customer? I'm curious#2018-10-0611:47staskIt would be still simpler to have db per customer for moving between systems.#2018-10-0413:46khardenstineIs on-prem ok with java9+? I finally got around to updating my dev environment from jdk8 to 11 and im getting netty illegal reflection warnings with datomic free on startup#2018-10-0413:56donaldballIf you scroll up, you’ll see that a couple of us have discovered a substantive problem with standard datomic peers in jdk10 running at least against a dev mode transactor. I have yet to confirm the problem against a production transactor, or to reduce the problem to a simple test case, alas.#2018-10-0414:11khardenstineOoph thats disheartening. Thanks#2018-10-0414:14donaldballIt’s worth noting that we could be outliers and Cognitect is committed to on-prem moving forward, so you may want to have a conversation with their sales or support folk!#2018-10-0423:42johnjCurious, how do you know Cognitect is commited to on-prem?#2018-10-0501:11donaldballhttps://www.reddit.com/r/Clojure/comments/9gmss7/rich_hickey_on_datomic_ions/e696nk0/#2018-10-0416:42kennyIs there a way to pass datomic.ion.dev/push and datomic.ion.dev/deploy the ion-config map as a data structure and not have them implicitly read it in from a classpath location?#2018-10-0420:28stuarthalloway@U083D6HK9 no -- the config map becomes part of the deployed artifact, so reproducibility would take a hit if was in a command#2018-10-0420:30kennyWe generate the ion-config.edn dynamically based on our system configuration so the Ion config is not checked into version control. 
Not sure how reproducibility takes a hit.#2018-10-0418:44dfcarpenterI'm just starting out with datomic local dev and trying to connect with a client in the repl. When I try to setup the connection I get the error CompilerException clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :datomic.client/http-result and i'm not sure why. I have the transactor running, I created a database, and I have the peer server running as well#2018-10-0420:45dfcarpenterRealized I made a mistake when starting the peer. All good now.#2018-10-0420:46dfcarpenterCan anyone point me to open source clojure codebases which use dataomic. Im struggling to learn good schema design approaches#2018-10-0421:23eoliphantHi, I'm trying to optimize a query I have. It's relatively straightforward, I have a tree-structure and a recursive rule that says for a given node, I'll match on a parent of a given type. Works fine, returns in around 30ms for a single one. I'm using a collection binding for the node id, and the response time appears to be more or less linear, which is not a big deal for a few of them, but I just ran into a test case where it needs to match ~1000 and it's taking about a minute to complete. any ideas/suggestions on speeding this up?#2018-10-0501:12pvillegas12You can change your data shape to a flat parent child mapping (repeats data obviously) for querying#2018-10-0508:02eraserhdNot sure whether this applies to you, but I had tree structured data where I queried whether two nodes were in the same tree. Rephrasing it to whether two nodes have the same root made it much better.#2018-10-0508:04eraserhdAlso, I have some super complicated queries running all together in under 18ms. They jump to 30ms when Datomic can’t cache the compiled query. This happens when (not= (hash query) (hash (read-string (pr-str query)))).#2018-10-0508:05eraserhd(Regular expressions are the usual culprit.)#2018-10-0514:59eoliphantFigured it out. 
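eraserhd's cacheability condition above can be turned into a small check; this helper is a sketch of his description, not an official API:

```clojure
;; Datomic caches compiled queries by value. A query containing a regex
;; literal reads back as a fresh Pattern object, so its hash changes
;; across a print/read round trip and the cache is missed on every call.
(defn cache-friendly?
  [query]
  (= (hash query) (hash (read-string (pr-str query)))))
```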
One of my devs had gotten a bit too modular with his rules lol. I rewrote it more simply, and it's 20x faster and no longer displays that linearity#2018-10-0500:47eoliphant@dfcarpenter have you looked at the mbrainz db?#2018-10-0502:32dfcarpenter@eoliphant I will take a look#2018-10-0502:32dfcarpenterThanks#2018-10-0503:50dfcarpenterHow do I turn off the logging in the repl when running datomic?#2018-10-0504:16csm(.setLevel (org.slf4j.LoggerFactory/getLogger "datomic") ch.qos.logback.classic.Level/WARN) may do it#2018-10-0513:06marshall@dfcarpenter Datomic on-prem? You can edit your logback.xml to adjust level and logger target#2018-10-0515:27Andreas LiljeqvistCan I check if an input has a specific attribute - That is, give entity ?e as input arg and return ?match if attribute ?e :whatever#2018-10-0515:28Andreas LiljeqvistWhere (= ?match ?e)#2018-10-0517:11okocimdid you figure out what you need here? I’m not exactly sure what you’re going for, but I think you can do it more simply than by using the ‘=’ predicate#2018-10-0515:33Andreas LiljeqvistI want to do something like (d/q '[:find [?match ...] :in $ [?e] :where [?e :schema/type :logger] [(= ?e ?match)]])#2018-10-0515:43Andreas LiljeqvistIt can be done in an easier way just by using d/entity and filter, but still?#2018-10-0515:58Andreas Liljeqvistidentity can be used to force the var#2018-10-0516:50eraserhdWhat does it mean when a peer logs at INFO "datomic.kv-cluster: {:event :kv-cluster/get-pod-meta, :pod-key "...", ...} with the same pod-key lots and lots and lots of times in a row?#2018-10-0517:30okocimHas anyone found a preferred way to compose or at least parameterize pull expressions in queries? I find using syntax quote to be a bit awkward, because of the need to var-unquote all of the other symbols in the query with ~'#2018-10-0517:58eraserhdI was doing this for a while with clojure.walk/postwalk. 
Just something like (clojure.walk/postwalk #(get params % %) expr), and then you can replace some symbol with a value.#2018-10-0517:58spiedeni just quote the individual things that need it like '*#2018-10-0517:59eraserhdOf course, use rules first, if you can. And pass extra parameters to q and bind with :in. if you can.#2018-10-0518:17okocimThanks for all of the replies. I think I’ll try the postwalk approach. This may be a bad idea, but I’m thinking of using a symbol prefixed with ! (e.g. !customer-with-address) and postwalk to replace those from a params list#2018-10-0518:25khardenstineYou can just pass pull expressions as inputs to your query:
(d/q '[:find (pull ?attr my-pull) .
       :in $ my-pull
       :where [:db.part/db :db.install/attribute ?attr]]
     db ['* :db/doc])#2018-10-0518:28okocimoh, well thanks. I think that’s the best one of all 🙂#2018-10-0521:21kennyIs it ok for me to add my own keys to datomic/ion-config.edn?#2018-10-0521:54kennyI performed a parameter upgrade on my Datomic cluster and it has been updating for the past 25 mins. Is this normal?#2018-10-0521:59kennyAh, it's because my Ions threw an exception. Seems like that should fail the parameter upgrade.#2018-10-0608:12Petrus TheronHey Guys 🙂 I need some advice on reusing the Datomic transaction format (or at least EDN) for serializing power station measurements coming out of an STM32 microcontroller's serial bus (via an FTDI chip) that will remain human-readable while gaining machine-readability. I looked at the Datomic Cloud wire protocol, and I wanted to ask about the evolution of the design to save myself future hassle.
At the moment, the controller is spitting out a string like:
START;
Voltage Reading: 3.3V
Current Reading: 25mA
Last start time:
...more readings
END;
Which I'd like to replace with something more extensible and sane, like:
[{:my-company.v1/input-voltage 3.3M
:my-company.v1/timestamp #inst 12312312312
:my-company.v1/output-voltage 5.5M
:my-company.v1/cell-current 0.00005M
:my-company.v1/started-at 1231313123
...} {:my-company.v2/input-voltage 3.36M, ...}]
Specifically, I see that the Datomic Cloud protocol has a tx-data {:tx-data [tx1 tx2 ...]} key when passing txs around.
Any advice on growing the schema (flat vs. nested values), versioning? Stream-ability of the feed? Pub/Sub considerations?#2018-10-0700:53grounded_sageI’ve got an app which I would like to prototype with Datomic Cloud Ions. Though it is a chat not so I’m curious what the cold start/latency story is like. I’m having trouble finding information regarding this. Datomic Ions may not be suitable which is fine but if it is ok then I would prefer to use it. #2018-10-0721:47dominicmThe startup time is good. Clojure isn't run on the lambda, it runs in the auto scaling group, the lambda is a lightweight rpc proxy. #2018-10-0815:35grounded_sageCould you explain what the rpc proxy is, like the work it is doing? I’m still learning all of this. #2018-10-0815:35grounded_sage@U09LZR36F btw thanks for responding. Was thinking there is little activity in here and would go without a response#2018-10-0815:44dominicmI shouldn't have said rpc. I made that bit up, I meant to say "it acts as an rpc". Basically you can think of it as doing a HTTP request to your auto-scaling group, forwarding on the data that went to the lambda to your asg.#2018-10-0815:44dominicmI'm approximating a little bit here 🙂#2018-10-0703:55Joe Lanechat not or chat bot?#2018-10-0704:44grounded_sageChat bot haha#2018-10-0710:20misha@petrus http://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html#2018-10-0719:01Joe LaneIt’s not a big deal. Once a conversation starts the lambda is warm, use one lambda for all bot calls and it will always be warm. #2018-10-0809:36grounded_sage@jarppe I get that. But I’m assuming Ions handles all the Lambda work so you essentially just write Clojure code? I’m not up on all of it. I’m in the front end world#2018-10-0809:36stijnquestion on ions push: it complains that there is a :local/root dependency and hence you have to specify a uname. However, what if this local dependency is in the same git repo, shouldn't that use the git commit then? 
(we are migrating to ions, but have to keep the existing API available, so we have introduced multiple deps.edn projects in the same git repo)#2018-10-0815:42Lone Rangerlooking for some advice on data modeling best practices. I'm currently using a compound key to track information but my intuition tells me that this is an anti-pattern#2018-10-0815:44Lone RangerObviously it would be ideal if I could omit the :item/key and have the uniqueness of the datoms be predicated on :item/name, :item/category, and :item/subcategory but predictably if I make those :db.unique/identity it's steam-rolling other datums#2018-10-0815:48pvillegas12Have you looked at https://github.com/arohner/datomic-compound-index? It does not solve the problem but sheds light into it#2018-10-0816:08Lone Rangerinteresting. Well at least I'm not the first person to run into this 🙂#2018-10-0816:19marshall@goomba There’s nothing inherently “wrong” with modeling compound uniqueness as a munged-key#2018-10-0816:19marshallyes, it involves redundant data#2018-10-0816:20marshallbut, if you actually require compound uniqueness semantics, then you need to do something like that#2018-10-0816:20Lone Rangeryayyyy okay great.#2018-10-0816:21marshallis the type of :item/key string?#2018-10-0816:21Lone Rangervector#2018-10-0816:21Lone Rangerof keywords#2018-10-0816:21marshallvector is’nt a datomic db.type#2018-10-0816:21marshallah, it’s cardinality many?#2018-10-0816:21marshalli would probably avoid that#2018-10-0816:22marshallin fact, you cant do that#2018-10-0816:22Lone Rangerahh sorry I'm actually putting the hash value of the vector but for omitted that for simplicity#2018-10-0816:22marshall" Only (:db.cardinality/one) attributes can be unique.”#2018-10-0816:22marshallok#2018-10-0816:22Lone Ranger"simplicity"#2018-10-0816:22marshallyeah, a hashed values is probably fine#2018-10-0816:23marshallalthough it does drop one potential advantage#2018-10-0816:23marshallwhich is index locality#2018-10-0816:23marshalli.e. 
[:a :aa :aaa] will hash very differently (maybe) than [:a :aa :ccc]#2018-10-0816:24marshallbut if you made them something like compound strings, they would sort more ‘realistically’#2018-10-0816:24marshall":a:aa:aaa" and ":a:aa:ccc"#2018-10-0816:25marshallalso human readable#2018-10-0816:25marshallwhich is nice for debugging and/or error handling#2018-10-0816:25Lone Rangerahh good point. yeah I'm at the dev phase where I just throw everything at the wall and see what ticks#2018-10-0816:25Lone Rangerbut that's a better idea#2018-10-0816:26Lone Rangerthank you 🙂#2018-10-0816:26marshallnp#2018-10-0819:28ghaskinshi all, im struggling to find out how to determine the txinstant of the last commit to the db#2018-10-0819:28ghaskinsi can get (basis-t) of course#2018-10-0819:28ghaskinsbut then im not sure how to get t -> txinstant#2018-10-0819:31souenzzo@ghaskins
(defn t->inst
  [db t]
  (:db/txInstant (d/pull db [:db/txInstant] (d/t->tx t))))
#2018-10-0819:32ghaskinsawesome, thank you @souenzzo#2018-10-0821:49eggsyntaxIs there any way to query and get one or a few results that doesn't require retrieving a large amount of data (assuming a cold peer, in this case my local [on-prem] REPL)? I think (is this correct?) that both the :find ?e . find specification and the sample aggregation function retrieve the complete result set before cutting it down. It's fairly common (for me at least) to want to get a couple of representative matching entities in a fast way, and it seems like there would be some way to achieve that with a query.#2018-10-0821:55Joe Laneasync query that returns a channel? take N results from channel, then close?#2018-10-0821:58eggsyntaxIs there such a thing as an asynchronous query (for on-premise)? I thought queries all went through datomic.api/query, which as far as I know is fundamentally eager.#2018-10-0821:59Joe LaneI’m very unfamiliar with on-prem.#2018-10-0821:59Joe LaneAll I know is the client api.#2018-10-0821:59eggsyntaxWhereas I'm pretty unfamiliar with the client api 😉#2018-10-0822:00eggsyntaxIncidentally, I think I can achieve something like this by hitting the indexes instead of querying. It just seems like there would be a way to do it via query, since (I imagine) it's a common need.#2018-10-0901:03ozWhat's the proper way to backup a Datomic Cloud database? I started down the route of trying to restore from a Dynamo snapshot, create a new storage stack, then update the existing compute stack to use that new storage stack system.#2018-10-0911:28stuarthallowayCloud databases are redundantly stored on multiple storages that are themselves redundant. There is not (currently) a backup/restore as with On-Prem.#2018-10-0913:12oz:+1:#2018-10-0901:04ozHowever it doesn't look like that will work as I was originally thinking.#2018-10-0908:30val_waeselynck@eggsyntax I don't see how you can do that via Datalog, but maybe you can use the index API to that end. 
Also, have you set up memcached on your local machine ? You can get a lot of speedup with no extra code this way#2018-10-0913:10eggsyntaxYeah, I think indexes end up being the way to go here. Thanks!
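On the on-prem peer API, the index route might look like this sketch; the attribute name is illustrative:

```clojure
;; Walk the AEVT index lazily and realize only a few entities, instead
;; of running a query that materializes the full result set first.
(require '[datomic.api :as d])

(defn sample-entities
  [db attr n]
  (->> (d/datoms db :aevt attr) ; lazy seq of datoms for this attribute
       (take n)
       (map #(d/entity db (:e %)))))

;; e.g. (sample-entities db :user/email 3)
```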
I just started on a new team recently, so I haven't tried to set up memcached yet, but definitely planning to. I haven't tried before to put it between my local machine and a remote DB (which is what I'm querying in this case) -- but it seems like that would be possible?#2018-10-0914:26val_waeselynckIt is totally possible - it's just a matter of starting your local REPL with the right JVM option (and the memcached server started somewhere else on your machine)#2018-10-0914:56eggsyntaxThanks Val, I appreciate it 🙂#2018-10-0915:37jocrauIs there a best practice solution for importing data from a large CSV file residing in an S3 bucket into Datomic Cloud? I currently use iota (https://github.com/thebusby/iota) in combination with tesser (https://github.com/aphyr/tesser) to read from a local file and parallel process chunks.#2018-10-0918:54ro6I'm just getting started with tools.deps + Datomic Ions today. Where should I post issues or unintuitive things as I find them so they will be useful to others?#2018-10-0919:17jaretYou can send them to me (<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>) or if you think it would make a good community post, you can share it on our forums. https://forum.datomic.com/#2018-10-0919:33ro6@U1QJACBUM Thanks! I think this first one is more tools.deps specific, where should that go? I saw there's no "issues" section on the GitHub repo#2018-10-0920:10dominicmJira is linked from the contributing.md or readme#2018-10-0921:29jarethttps://forum.datomic.com/t/datomic-0-9-5783-now-available/642#2018-10-0922:52kennyDoes anyone have any examples of running the socks proxy script on CircleCI?#2018-10-1012:59mpingis there a way of retrieving a set of entities grouped by a certain key?#2018-10-1013:00mpingI know I can use a custom aggr function#2018-10-1013:46val_waeselynck@mping Maybe with distinct ? 
https://docs.datomic.com/on-prem/query.html#aggregates-returning-collections#2018-10-1013:58mpinggonna give it a try#2018-10-1014:29ro6How are people handling middleware with Ions since routing happens at the API Gateway layer?#2018-10-1014:42Joe LaneNothing stops you from using ring middleware with ions. put the middleware around your app inside your call to apigw/ionize#2018-10-1014:47ro6@lanejo01 That's what I was thinking, but then aren't I wrapping each handler individually, even if they all share common middleware?#2018-10-1014:47Joe Lanemake 1 handler#2018-10-1014:47Joe Lanedo your routing in your app#2018-10-1014:48ro6Oh, so just a bare API Gateway proxy on "/", then everything the same as a usual Clojure webapp from there?#2018-10-1014:50Joe Laneit can be, if you want to build it that way.#2018-10-1014:50Joe LaneI havent done it that way so I may be wrong, but that is my understanding.#2018-10-1014:50ro6Well, I don't want to duplicate routing at two layers....#2018-10-1014:54ro6Delegating everything to the app feels like a misuse of API Gateway, but I'm just getting started with this entire stack. I'm basically here asking for best practices, or stories of pain so I can avoid. Maybe those haven't really congealed around Ions yet.#2018-10-1015:09eoliphantHave 'ionized' 3 apps at this point, run every thing through a single API GW for two of them, and the last we have two for a 'hard' separation of user and admin functions. I've been back and forth on it as well, but I'm leaning more towards using APIGW, more or less like ions use lambda, more of an AWS'ey way to 'lift' stuff into the clojure/datomic world as early as possible in a given flow, then just take it from there.#2018-10-1015:13ro6Thanks for the response. I feel much more comfortable drawing from at least some experience rather than making it up as I go.
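The single-handler approach Joe Lane describes can be sketched roughly as follows; the route logic and the middleware are placeholders, not a real app:

```clojure
;; Sketch: one ring app (routing + middleware done in Clojure) behind a
;; single API Gateway proxy resource, ionized once at the edge.
(require '[datomic.ion.lambda.api-gateway :as apigw])

(defn app-handler
  [{:keys [uri] :as request}]
  ;; route in code here, e.g. with compojure/bidi/reitit
  {:status 200
   :headers {"Content-Type" "text/plain"}
   :body (str "handled " uri)})

(defn wrap-logging ; stand-in for a shared middleware stack
  [handler]
  (fn [request]
    (println "request:" (:uri request))
    (handler request)))

(def ionized-app
  (apigw/ionize (wrap-logging app-handler)))
```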
Just to clarify, you mean two "routes" in APIGW mapped to two Ion handlers?#2018-10-1017:52stijnwe went with one proxy resource on / in apigw too, we're migrating from datomic cloud client on elasticbeanstalk, so this was a question of get it working with the least rework.#2018-10-1017:52stijnmaybe one day, i'll try to separate the routing, but then it still makes sense to do that in clojure and use something like bidi to generate the swagger for api gw#2018-10-1023:48eoliphanthey @U8LN9KT2N yep, two separate ion/lambda handlers, and actually 2 separate endpoints, as opposed to 2 routes in a single endpoint. That was Probably more laziness than anything else lol. But may look at doing the routing in a single APIGW at some point#2018-10-1015:22ro6I'm definitely drawn to the "one APIGW entry proxying to one Ion handler per app" approach since then I can reuse the established Clojure webapp patterns and track all my routing/etc.. in code rather than a separate structure versioned in AWS. I've been second-guessing though since the core team's tutorials (eg https://docs.datomic.com/cloud/ions/ions-tutorial.html#webapp) guide towards "one Gateway entry and Ion handler per operation".#2018-10-1015:22jocrauHow can I grant a lambda ion read access to an S3 object?#2018-10-1015:27Joe LaneCreate a policy in IAM with read access to said S3 object, copy the arn for that policy, then do a parameter upgrade on your datomic cloud stack. At the bottom of the first ( I think) page there is a section that says effectively “attach the arn for the policy you want these nodes to have”. Paste the arn of the policy you created, complete the parameter upgrade, and that should do it.#2018-10-1015:27Joe Lane(Just did this yesterday on the 4th project we have with ions)#2018-10-1015:51jocrauThat worked. Thanks! For reference: My basic misconception was that I added an inline policy to the Lambda execution role. 
But the Lambda function created is just a thin layer for invoking the Clojure code inside the Datomic compute node.#2018-10-1015:57jocrau(@stuarthalloway talks about this about 21 minutes into his intro video https://www.youtube.com/watch?v=3BRO-Xb32Ic)#2018-10-1016:07Joe LaneYeah, the docs are correct, however I believe that part is buried in “Operation > Access Control” nowhere near the rest of the ion tutorial. https://docs.datomic.com/cloud/operation/access-control.html#authorize-ions#2018-10-1113:24adamfreywhen I write a script that's supposed to run and exit and that script connects to a datomic cloud db via the client api, my script always hangs around when it's finished. And I have to kill it with Ctrl-c. The only way I've found to get it to shutdown on its own is by using (System/exit 0), which is pretty extreme.
Even (shutdown-agents) doesn't do anything#2018-10-1113:26adamfreyI wrote a test script that does nothing but connect to datomic cloud and these are the threads that exist when it hangs:
[#object[java.lang.Thread 0x794fbf0d "Thread[async-dispatch-7,5,main]"], #object[java.lang.Thread 0x504f2bcd "Thread[qtp1841195153-13,5,main]"], #object[java.lang.Thread 0x3c7e7ffd "Thread[qtp1841195153-14,5,main]"], #object[java.lang.Thread 0x18c2b4c6 "Thread[async-thread-macro-1,5,main]"], #object[java.lang.Thread 0xd5d9d92 "Thread[async-dispatch-2,5,main]"],
#object[com.amazonaws.http.IdleConnectionReaper 0x347c7b "Thread[java-sdk-http-connection-reaper,5,main]"],
#object[java.lang.Thread 0x372b2573 "Thread[qtp1841195153-18,5,main]"],
#object[java.lang.Thread 0x12bb3666 "Thread[async-dispatch-5,5,main]"],
#object[java.lang.Thread 0xaad1270 "Thread[Signal Dispatcher,9,system]"],
#object[java.lang.Thread 0x126f428e "Thread[async-dispatch-1,5,main]"],
#object[java.lang.Thread 0x41a372c1 "Thread[async-dispatch-6,5,main]"],
#object[java.lang.Thread 0x45d28ab7 "Thread[qtp1841195153-15,5,main]"],
#object[java.lang.Thread 0x3b75fdd0 "Thread[qtp1841195153-12,5,main]"],
#object[java.lang.ref.Finalizer$FinalizerThread 0x7e64b248 "Thread[Finalizer,8,system]"],
#object[java.lang.Thread 0x7e4f5062 "Thread[main,5,main]"],
#object[java.lang.Thread 0x66fd9613 "Thread[qtp1841195153-19,5,main]"],
#object[java.lang.Thread 0x461e9b31 "Thread[async-dispatch-3,5,main]"],
#object[java.lang.Thread 0x78652c15 "Thread[qtp1841195153-17,5,main]"],
#object[java.lang.ref.Reference$ReferenceHandler 0x6e5dc02d "Thread[Reference Handler,10,system]"],
#object[java.lang.Thread 0x1c71d704 "Thread[async-dispatch-4,5,main]"],
#object[java.lang.Thread 0x623e578b "Thread[clojure.core/tap-loop,5,main]"],
#object[java.lang.Thread 0x414a3c7d "Thread[clojure.core.async.timers/timeout-daemon,5,main]"],
#object[java.lang.Thread 0x74efc394 "Thread[async-dispatch-8,5,main]"],
#object[java.lang.Thread 0x201a84e1 "Thread[
#2018-10-1113:27adamfreyis there any way other than System/exit to fix this?#2018-10-1115:59jocrauI am trying to parse and import a 25GB CSV file into Datomic Cloud (prod topology with two i3.large instances). I get “clojure.lang.ExceptionInfo: Busy indexing”. Before I start to implement a retry strategy on the client side, what are the dials and knobs to improve the indexing performance? (I already set :db/noHistory to “true” on all my attributes)#2018-10-1116:22PBIT seems that I cannot bind nil to a var in datomic, resulting in this failing:
(d/q '[:find [?tx ...]
:in ?log ?since ?til
:where [(tx-ids ?log ?since ?til)
[?tx ...]]]
(d/log conn) #inst "2018-10-11T15:53:51.974-00:00" nil)
Exception Unable to find data source: $__in__3 in: ($__in__1 $__in__2 $__in__3) datomic.datalog/eval-rule/fn--5763 (datalog.clj:1450)
While this works:
(d/q '[:find [?tx ...]
:in ?log ?since
:where [(tx-ids ?log ?since nil)
[?tx ...]]]
(d/log conn) #inst "2018-10-11T15:53:51.974-00:00")
[13194187931476 13194187931455 13194187931456]
Why is that?#2018-10-1116:25Joe Lane@jocrau If you look at https://github.com/Datomic/mbrainz-importer you can find examples of how to pipeline async transactions which should yield much higher performance of writes. Are you doing any reads when you’re importing the data or is it pure writes?#2018-10-1116:26Joe LaneYou can look at the cloudwatch dashboard to find different bottlenecks in your system. Sometimes it may be cpu, memory, or DDB allocated write-throughput units.#2018-10-1116:28Joe LaneThe dashboards are very helpful in getting started with perf. That being said, I highly recommend setting up retries on your writes. Things happen and its a good idea to program defensively. Maybe queue things up in kinesis? Thats one approach we took.#2018-10-1117:09kennyI am getting this exception when trying to connect to a DB running the production topology:
(def cust-conn (d/connect client {:db-name "cust-db/591b632f-6c14-4807-af7b-da30929d5791"}))
clojure.lang.ExceptionInfo: Datomic Client Exception
clojure.lang.Compiler$CompilerException: clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :datomic.client/http-result {:status nil, :headers nil, :body nil}}, compiling:(form-init3645246680023467651.clj:1:16)
The strange thing is that if I try to connect to another DB, it works:
(def admin-conn (d/connect client {:db-name "admin"}))
=> #'dev.system/admin-conn
Any idea what is going on here?#2018-10-1117:10kennyInteresting...
(d/create-database client {:db-name "foo"})
=> true
(d/create-database client {:db-name "foo/bar"})
=> true
(d/connect client {:db-name "foo"})
=> {:db-name "foo", :database-id "55e543d1-14f0-4c9a-b3a9-8fa089a730e9", :t 3, :next-t 4, :type :datomic.client/conn}
(d/connect client {:db-name "foo/bar"})
clojure.lang.ExceptionInfo: Datomic Client Exception
Are db names with a / not allowed??#2018-10-1117:22jocrau@lanejo01 Thanks for your help. I have studied the mbrainz importer. It makes the processing CPU bound by parallelizing it by using pipeline-blocking. I have used that approach in the past but switched to Kyle’s tesser library which works nicely (and I find easier to reason about). The ratio between actual and provisioned write capacity in DynamoDB is healthy. The current problem is that the indexing (which happens asynchronously in the background afaik) can’t keep up with the transaction throughput. And I wonder whether there is a configuration option to tune this in Datomic Cloud.#2018-10-1117:36Joe Lanegot a link to tesser? what does it buy you?#2018-10-1117:38jocrauhttps://github.com/aphyr/tesser#2018-10-1117:39jocrauI use it to execute composed functions in parallel.#2018-10-1117:45favilaare you using tesser in such a way that it propagates backpressure?#2018-10-1117:47favilaI don't know for sure that datomic client acts the same way, but with the datomic peer api as long as you deref your transact somewhere you won't get exceptions. Your application may slow to nothing, but eventually the transactor will catch up#2018-10-1117:48favilatesser is designed for cpu-level parallelism, but transacting with high throughput requires io pipelining with a bounded depth and blocking to receive backpressure#2018-10-1117:49faviladoesn't mean you can't use tesser but there has to be some care about how it is using the transactor#2018-10-1118:46jocrauThe call to the synchronous transact blocks, but in case of a transactor being busy indexing returns an ex-info map right away. The behavior differs from the on-prem client library which returns a future.#2018-10-1118:52jocrauOne way to handle that is to retry the failed transaction. 
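A minimal retry loop around the busy anomaly might look like this sketch (Cloud client API; the backoff numbers are arbitrary):

```clojure
;; Sketch: back off and retry when the transactor reports that it is
;; busy indexing (:cognitect.anomalies/busy); rethrow anything else.
(require '[datomic.client.api :as d])

(defn transact-with-retry
  [conn tx-data & {:keys [max-tries sleep-ms] :or {max-tries 10 sleep-ms 1000}}]
  (loop [tries 1]
    (let [result (try
                   (d/transact conn {:tx-data tx-data})
                   (catch Exception e
                     (if (and (< tries max-tries)
                              (= :cognitect.anomalies/busy
                                 (:cognitect.anomalies/category (ex-data e))))
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do (Thread/sleep (* tries sleep-ms)) ; linear backoff
            (recur (inc tries)))
        result))))
```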
On the other end, I am still trying to reduce the fails by increasing the indexing performance.#2018-10-1118:54favilahttps://docs.datomic.com/cloud/client/client-api.html#busy#2018-10-1118:54favilalooks like that is by design#2018-10-1118:55favilalooks like they want you to do something like this:#2018-10-1118:55favilahttps://github.com/Datomic/mbrainz-importer/blob/master/src/cognitect/xform/batch.clj#L70-L92#2018-10-1217:20jocrauI found https://github.com/BrunoBonacci/safely. It seems to be a great tool to implement a retry strategy.#2018-10-1218:16favilathank you for that link!#2018-10-1121:25grzmAny issues running Clojure 1.10.0-RC1 on Datomic Cloud?#2018-10-1121:56csmI just set up a new datomic cloud instance, can start the socks proxy, but get ExceptionInfo com.amazonaws.services.s3.AmazonS3Client.beforeClientExecution(Lcom/amazonaws/AmazonWebServiceRequest;)Lcom/amazonaws/AmazonWebServiceRequest; clojure.core/ex-info on trying list-databases or create-database#2018-10-1122:25csmaha, had the wrong version of aws-java-sdk-core from another dependency#2018-10-1122:00luchiniThe EC2 instances of my query groups started shutting down and terminating non-stop recently. It seems that the culprit is this:#2018-10-1122:00luchini#############################################################
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
#2018-10-1122:00luchiniAnyone else with this problem?#2018-10-1213:24jaret@U4L16CHT9 I might know what’s going on here. Could you copy out the lines above the “#” break line? The entire block delimited by the “#” break line rows.#2018-10-1517:22luchiniThese are the lines I get before the #:
Calculating memory settings
No cache configured
/opt/datomic/export-environment: line 31: {:retry: command not found
#############################################################
DATOMIC EXITING PREMATURELY
Error on or near line 31; exiting with status 1
Environment:
S3_VALS_PATH=primary-storagef7f305e7-35z8d8t26waz-s3datomic-xxjwrytensid/primary/datomic/vals
DATOMIC_INDEX_GROUP=primary
DATOMIC_TX_GROUP=primary
TERM=linux
DATOMIC_APPLICATION_PID_FILE=/opt/datomic/deploy/image/pids/application.pid
DATOMIC_CLUSTER_NODE=true
DDB_CATALOG_TABLE=datomic-primary-catalog
DATOMIC_CODE_DEPLOY_APPLICATION=red-robin
DATOMIC_PRODUCTION_COMPUTE=primary-Compute-N688L1Z9E3CF
JVM_FLAGS=-Dclojure.spec.skip-macros=true -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -XX:MaxDirectMemorySize=256m
DATOMIC_CACHE_GROUP=primary
DISABLE_SSL=true
DATOMIC_HOSTED_ZONE_ID=Z38EPCPQ9NXY6O
DATOMIC_XMX=2582m
PATH=/sbin:/usr/sbin:/bin:/usr/bin
OS_RESERVE_MB=256
EFS_VALS_PATH=datomic/vals
S3_AUTH_PATH=primary-storagef7f305e7-35z8d8t26waz-s3datomic-xxjwrytensid
RUNLEVEL=3
runlevel=3
AWS_DEFAULT_REGION=us-east-1
PWD=/
LANGSH_SOURCED=1
DATOMIC_QUERY_GROUP=sandbox
DATOMIC_ENV_MAP=<REDACTED>
LANG=en_US.UTF-8
KMS_CMK=alias/datomic
FS_VALS_CACHE_PATH=/opt/datomic/efs-mount/datomic/vals
PREVLEVEL=N
previous=N
PCT_JVM_MEM_FOR_HEAP=70
HOST_IP=10.213.21.224
CONSOLETYPE=serial
SHLVL=2
CW_LOG_GROUP=datomic-primary
UPSTART_INSTANCE=
UPSTART_EVENTS=runlevel
EFS_DNS=
DDB_LOG_TABLE=datomic-primary
UPSTART_JOB=rc
S3_CERTS_PATH=primary-storagef7f305e7-35z8d8t26waz-s3datomic-xxjwrytensid/primary/datomic/access/certs
PCT_MEM_FOR_JVM=100
_=/bin/env
#############################################################
#2018-10-1713:17jaretThanks! We’re working on a fix for this.#2018-10-1200:46jarethttps://forum.datomic.com/t/datomic-cloud-441-8505-critical-update/645#2018-10-1209:08stijnto be clear, there's no update of the storage stack required, right?#2018-10-1212:59marshallCorrect#2018-10-1214:34ozIn CFT 441 you switched to YAML, now in 441-8505 you are using json again. It's not a big deal this time, but in the storage stack we have a couple of modification that get around the issue with running in a AWS account that has EC2 classic support. So we have to cherry pick those changes by hand into any storge CF template upgrades. I did this just recently from 297 to 409 and it wasn't too bad, but the version 441 was a bit harder due to the change from json to yaml. Again it's a non-issue for 441-8505 since it's only a compute stack change, but going forward can you please distribute one format or both yaml and json?#2018-10-1214:38marshallthe YAML change was a marketplace artifact; we did not choose that and we intend to use json#2018-10-1214:53oz👌#2018-10-1420:56eoliphantI posted this in the main thread as well. I'm applying this to a solo setup. the compute upgrade worked fine, but I'm getting a Error creating change set: The submitted information didn't contain changes. Submit different information to create a change set. back from CF when I try to apply the update for storage#2018-10-1200:53csmdoes upgrading really require deleting the stack and creating it again?#2018-10-1200:55stuarthallowayHi @U1WMPA45U, it depends on what you mean by "the stack".#2018-10-1200:56stuarthallowayIf you are running in the recommended two stack shape, then no: https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade#2018-10-1200:57csmI launched the CF template and got three CF stacks in total — so update the stack named “compute”, yes?#2018-10-1200:58stuarthallowayUnfortunately, no. 
AWS's marketplace rules are in direct conflict with AWS's CloudFormation best practice guidelines. The "deleting" path takes you from Marketplace-land to CF-best-practice-land.#2018-10-1200:58stuarthallowayAfter you do this once, you will be in CF-best-practice-land and never have to do it again.#2018-10-1200:59csmgot it, thanks!#2018-10-1200:59stuarthallowaySee also https://docs.datomic.com/cloud/operation/upgrading.html#why-multiple-stacks#2018-10-1201:56jocrauJust a quick note on the update: I had to re-adjust the capacity settings of the compute stack autoscaling group. The update seems to reset this to “desired 2, min 2, and max 3”.#2018-10-1209:16stijnsame issue here#2018-10-1213:00marshallthose are the default values; had you changed them to something else?#2018-10-1213:01marshallnote: https://docs.datomic.com/cloud/operation/scaling.html#database-scaling#2018-10-1213:01marshallyou should not be using autoscaling on the primary compute group#2018-10-1213:38jocrauI have two use-cases to change autoscaling of the primary compute group: First, to save money while experimenting with the prod topology (I set them to 0 during times I am not working on it), and second to try to increase transaction performance for large imports (that might be a brute force approach, see also https://forum.datomic.com/t/tuning-transactor-memory-in-datomic-cloud/643).#2018-10-1213:40jocrauThe documentation is a bit confusing. The two sentences “If you are writing to a large number of different databases, you can increase the size of the primary compute group by explicitly expanding its Auto Scaling Group.” and “You should not enable AWS Auto Scaling on the primary compute group.” seem to contradict each other. 
Am I missing something?#2018-10-1213:41marshallThose should not be autoscaling events; Scaling the group will not affect throughput for a single DB#2018-10-1213:41marshallyou can adjust the size of the group explicitly#2018-10-1213:41marshallyou shouldn’t use AutoScaling#2018-10-1213:42marshalli.e. Autoscaling events == things that AWS does for you triggered based on some metric/event#2018-10-1213:43marshallChanging the “min” “max” and “desired” explicitly is OK, but should be a fairly infrequent human-required action#2018-10-1213:58jocrauYou are right that it does not make sense to change the “desired” setting of the autoscaling group to adapt to spikes (that’s what the “auto” in autoscaling is for). But to increase the “max” and “desired” seems to be the best option currently to increase transaction (and indexing) performance in case of a known, large import.#2018-10-1214:38marshallif the import runs against a single database, changing the size of the compute group will not affect throughput#2018-10-1214:43marshallyou can change to using an i3.xlarge (instead of the default i3.large)#2018-10-1214:43marshallin your compute group#2018-10-1214:43marshallthat will improve import perf#2018-10-1214:55jocrauOk. I will give that a try.#2018-10-1215:06jocrauDoes the number of nodes influence indexing performance (on a single database)?#2018-10-1215:12marshallno#2018-10-1215:17stuarthalloway@jocrau @marshall as long as you have at least two, no#2018-10-1507:15stijnour use case is to have a prod setup for our staging environment, but without HA. so we set the 3 values (min, max, desired) to 1.#2018-10-1214:33jocrauA deployment to the compute group seems to be performed sequentially (see attached graph which shows the incoming network traffic to 5 nodes; mostly the JAR files I assume). 
Can this be done in parallel to speed up the deployment?#2018-10-1214:44marshall@jocrau No, that is specifically the way that rolling deployments work to maintain uptime and enable rollback#2018-10-1214:45jocrau@marshall Makes sense. Thanks.#2018-10-1216:29Lone RangerDoes anyone happen to know where I could find some good marketing materials on the Datomic value prop for non-technical executives?#2018-10-1216:36val_waeselynck@goomba I tried to answer that very question here: https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2018-10-1216:36val_waeselynckNote that the value prop of Datomic Cloud is a bit different#2018-10-1216:36Lone RangerHa! How serendipitous!#2018-10-1216:37Lone Rangerthat's fine, due to the nature of the data/work it would have to be self-hosted anyway.#2018-10-1217:16csmI don’t need to recreate my VPC endpoint again when I perform my first upgrade, do I?#2018-10-1217:21csmI think I have an answer to that: I can’t delete the datomic cloud root stack since the VPC endpoint stack depends on resources#2018-10-1217:28csmthe “compute” stack just failed to delete for me, with reason The following resource(s) failed to delete: [HostedZone].#2018-10-1217:31csm…which was because I had a record set for my VPC endpoint…#2018-10-1217:44grzmI'm setting up a new query group. The stack created without error. When doing the first deploy to the query group, it's failing ValidateService with ScriptFailed. The events details show [stdout]Received 503 a number of times, and then finally [stdout]WARN: validation did not succeed after two minutes. Ideas on where to start debugging this?#2018-10-1218:11kenny@grzm Check the CloudWatch logs - there was probably an exception.#2018-10-1218:15grzm@kenny cheers. thanks for the kick in the right direction.#2018-10-1317:52idiomancyhey, this is a data*script* so I apologize for asking it here, but there's a knowledge overlap and the datascript channel is deaaad.
is anyone aware of any performance considerations/ best practices for building derived databases in datascript?
For instance,
would it be generally faster to do something like
(db-with (empty-db) (d/q [:find (pull all entities I care about)]))
or to do something with d/filter?#2018-10-1318:00val_waeselynckWell, this depends on how much you read compared to how much you write. How often do you need to create a derived database, and what performance expectations do you have when reading it?#2018-10-1318:00idiomancythis would be for reading only#2018-10-1318:00idiomancyso, a materialized view#2018-10-1318:00idiomancythat can be queried with datalog semantics#2018-10-1318:01idiomancythe ideal optimization would be space complexity, honestly#2018-10-1318:01idiomancyso a better solution would take up less memory than other available equivelant solutions#2018-10-1318:02idiomancyI'm not sure how possible that is for datascript though#2018-10-1318:09val_waeselynckFiltering has essentially no space cost, it just slows down queries a bit#2018-10-1318:10idiomancyoh really!? that's great, so it's sharing the structures from the reference value?#2018-10-1318:12idiomancyohhh#2018-10-1318:12idiomancyI see#2018-10-1318:12idiomancyso its really just querying the same value with an additional predicate#2018-10-1318:12idiomancyfascinating#2018-10-1321:41Daniel HinesI'm trying to follow the Datomic on-prem tutorial in from a cider repl in emacs. When I eval (def client (d/client cfg)), I got the following error#2018-10-1321:42Daniel Hines2. Unhandled clojure.lang.Compiler$CompilerException
Error compiling datomic/client/impl/shared.clj at (349:13)
1. Caused by java.lang.RuntimeException
Unable to resolve symbol: int? in this context
#2018-10-1321:42Daniel HinesAdmittedly, I'm not sure whether this is a beginner, cider, or datomic question.#2018-10-1322:01dpsuttonWhat's your clojure version. That's a 1.9 predicate#2018-10-1322:01dpsuttonAny chance you're on 1.8 from lein new or some other reason?#2018-10-1323:14Daniel HinesThat's exactly what it was! Thanks @dpsutton#2018-10-1409:21avfonarevCan datomic ions handle file upload?#2018-10-1410:21henrikYep, no problem.#2018-10-1420:52eoliphantthough, given that you're in AWS, you have other options. We do all of our uploads to S3, then have a lambda ion handle the s3 notifications#2018-10-1508:09henrikI made a small UI to visualise the output of processing a file, a sort of visual debugger. This I handled by uploading a file directly to the Ion. For production processing, S3 is definitely the way to go.#2018-10-1420:51eoliphantHi, I'm trying to apply the 441-8505 patch. it worked fine for my solo compute stack, but i'm getting a Error creating change set: The submitted information didn't contain changes. Submit different information to create a change set. when I try to apply the update to the storage stack#2018-10-1511:02stuarthallowayHi @U380J7PAQ -- 441-8505 is a compute only update, so there is no change to storage. @U1QJACBUM if "compute only" is shown only in the summary column of the table, we should add it to the text description of each update so one does not have to look in two places.#2018-10-1513:56eoliphantAh, ok great, yeah wasn't quite clear from the release notes#2018-10-1515:33mgrbyteHi. Using datomic on-prem, just upgraded to 0.9.5703, now unable to connect via a repl. Getting connection timed out, but transactor running on the same port mentioned in the logs. (ddb-local) - Anyone seen anything similar?#2018-10-1515:34mgrbyteerror on connection fail is:
CompilerException clojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 4334 {:alt-host nil, :peer-version 2, :password "xxxx", :username "xxxx", :port 4334, :host "localhost", :version "0.9.5703", :timestamp 1539617467299, :encrypt-channel false}, compiling:(00209e77b10857cd356c6f8ff55888c36688ab74-init.clj:57:40)
#2018-10-1517:37jaret@U08715BSS what version were you upgrading from? Did you upgrade your transactor or peer first? I am going to look at reproducing.#2018-10-1608:59mgrbyte@U1QJACBUM I was previously running datomic-pro-0.9.5697#2018-10-1609:00mgrbyteThis is just locally for dev atm, with ddb-local.
Stopped everything, ran the transactor.
Bumped version in my deps.edn
Ran repl
then usual require and connect produces the above.
(on-prem)#2018-10-1609:10mgrbytefwiw, I with a fresh ddb-local database and relevant changes to transactor config, have it working.#2018-10-1609:14mgrbyteI've just had another go with the ddb-local database and previous config, and can no longer re-produce :shrug: 😕#2018-10-1613:00jaretThat’s very odd. I am going to keep poking at this. Thanks for the added information and report.#2018-10-1517:24luchini@val_waeselynck do you have any plans on porting datomock to the client library? Is that even possible?#2018-10-1519:00val_waeselynckNot clear to me that the forking abstraction is feasible there due to the potential transience of with'd dbs there. Maybe @U072WS7PE could tell us ? In any case, there's really not much to Datomock's implementation, so if you need it don't be afraid of writing it :)#2018-10-1604:13luchiniIt’s not an urgency for me as of now but definitely something I want to explore sooner rather than later so I’ll keep you posted!#2018-10-1613:27stijnif we would like to automatically push and deploy some of our branches to datomic cloud ions through CodePipeline/CodeBuild. What exact permissions does the codebuild instance profile need for being able to e.g. download the ions dependencies from the datomic maven S3 bucket? Also, I don't see any documentation on what is needed for pushing to codedeploy. Currently everything is happening as an admin user from a dev machine. Or is there a better way to setup CI for your ions?#2018-10-1615:00Joe LaneMy company has the exact same questions as @stijn. We are very interested in hearing about the best practices for CI/CD with Ions. After digging last night I found the top level codepipeline page seems to have my Ions application registered so maybe there is just manual exploration to be done?#2018-10-1615:05jeroenvandijk@stijn Not sure what exactly is needed, but as a first step you could have a permission that is allowed to forward the admin role to codebuild. 
This will not give the admin permission to the dev machine#2018-10-1615:06jeroenvandijkHere is more background https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#2018-10-1619:30grzmWe just saw a blip when deploying an ion:
ERROR, :message cryo is not a recognized vendor code (Service: AWSResourceGroupsTaggingAPI; Status Code: 400; Error Code: InvalidParameterException
There's no reference to cryo in our code. We saw it happen from two different remote laptops in two different states (MN and TN). Retrying the same deploy a few minutes later succeeded just fine. Any ideas? (I'm stepping away from my machine for a while, so won't be following up immediately, but happy to do so when I get back.)#2018-10-1619:39Joe Lane@grzm I ran into this last night on a different project, thought it was just a blip.#2018-10-1619:43jaret@grzm can you DM me the full error with request ID#2018-10-1619:44jaretI am going to log a case to AWS since you’ve both seen this. I’d like to see if they can track this down or provide any clues on what is unavailable.#2018-10-1619:46wilkes@grzm I sent @jaret the error message#2018-10-1620:05favilacan/should the same valcache dir be shared by multiple peer processes?#2018-10-1700:35jaret@U09R86PA4 multiple peers each with their own valcache. I’ll look to add that to the docs, but sharing is not supported.#2018-10-1700:45favilaThat’s too bad. Having shared big Valcache on a dev laptop (which is often multiprocess but same small set of remote txors) is the best use case I see. I run memcached for this now; shared Valcache would be much bigger, persist across reboots, and free up the ram now used for memcached#2018-10-1700:46favilaHow do Valcache and memcached interact if both are enabled?#2018-10-1719:42jaret@U09R86PA4 You can’t use Valcache and memcached together. Its one or the other. The tradeoffs are discussed here http://staging.docs.datomic.com/on-prem/valcache.html#vs-memcached#2018-10-1719:43favilaI am aware of the tradeoffs; I didn't realize they were mutually exclusive choices#2018-10-1719:43favilacould this be made clearer also?#2018-10-1719:44jaretYes. I agree. 
It needs to be made clearer in the docs.#2018-10-1719:47favilathat's also unfortunate, because a transactor can no longer eagerly populate memcached to shield storage from peer cache misses if the peer is using valcache#2018-10-1621:18grzmThanks @wilkes I just sent @jaret one that I got as well.#2018-10-1703:54dfcarpenterbeginner question. I am trying to use datomic free from within a luminus web project. In the repl I can't seem to find the datomic.api namespace. I am using datomic-free "0.9.5561" and have the transactor running using bin/transactor with the sample config#2018-10-1705:46Hadisince datomic schema only describe characteristic of data, im so curious about how "datomic data" can draw relationship between all entities (in purpose of reporting). At least i need something like "design mode" in MySQL so that i could tell what is happening on current entities in datomic. i was wondering to do something like "select distinct entities from db" is it possible ? it would be helpful if i could get a sample from each entities. 😕
For example, I want to uniquely retrieve entities with their attributes, like [{:person/name :person/address} {:school/name :school/personlist} ...], based on the facts asserted in Datomic#2018-10-1712:12chris_johnson@stijn @lanejo1 - for our builds of Datomic on-prem in CodeBuild, to get the Datomic Pro JARs into the build classpath, we use the documented method for putting your Datomic Maven repository credentials into environment variables, and then we have our company username and pw as SecuredString SSM parameters that get passed into the Environment block of the CloudFormation template that builds the CodeBuild project. I should think that would also work for the Ions dependencies, using $your_favorite_dependency_manager.#2018-10-1714:16Joe LaneThanks Chris, I hadn’t even thought of the ion dependency being an issue.#2018-10-1713:48eoliphant@hadi.pranoto it's sometimes a pain to grok, especially if you're coming from say relational dbs, but in datomic there's no db concept of an 'entity definition' or even their relationships at the schema level. entities are just arbitrary bags of attributes, and refs are for lack of a better term 'anonymous'. any structure beyond attribute defs is up to you. That's part of datomic's power. You're able to do what you described in MySQL because a table does provide a fixed 'bag of attributes', the schema has a concept of fkey relationships between tables, etc.
To your question, If you already have data, some of this can be inferred, tools like this one (https://github.com/felixflores/datomic_schema_grapher) do this. You can use it directly or steal some of its code for your use. Other approaches include using naming conventions like your :person/.. examples, additional custom attributes for schema elements, spec, etc#2018-10-1714:42grzm@jaret We haven't been able to successfully push since yesterday afternoon due to the cryo issue. Happy to be available to work with someone to get this figured out. It's put a heavy damper on development.#2018-10-1714:46jaret@grzm can you log a case with AWS from your account? I’ve logged one from ours asking for more information on the error. I’d be happy to provide the case number for your reference, but we need to get AWS support’s input on what is unavailable/invalid.#2018-10-1714:47grzmSure thing. What relevant Datomic issues should we include in the case?#2018-10-1720:39kennyGetting this exception when calling push in my code:
Exception thrown: cryo is not a recognized vendor code (Service: AWSResourceGroupsTaggingAPI; Status Code: 400; Error Code: InvalidParameterException; Request ID: 8db867cb-d24c-11e8-bb2d-59cd3680b29e)
#2018-10-1720:40kennyAnyone seen this before? Not clear what is causing this.#2018-10-1720:42wilkes@kenny We’ve been seeing this as well. Cognitect has a ticket open, and we’ve opened up one as well. This appears to be related: https://forums.aws.amazon.com/thread.jspa?messageID=872875#2018-10-1720:45kenny@wilkes Thanks. Have you tried the workaround the aws forums suggest there?#2018-10-1720:46kennyActually that's probably hidden in the ion-dev code.#2018-10-1720:46wilkes@kenny I haven’t because I think that is buried in the ion push code#2018-10-1720:47wilkesUpside is that it has forced us to think about what we need to facilitate easier local dev 🙂#2018-10-1720:47kennyUgh. This is kinda a big blocker - we can't deploy code. Did the Datomic team say they'd push a release with the workaround?#2018-10-1720:50jaret@kenny are you US-WEST-2?#2018-10-1720:50kennyYes#2018-10-1720:52jaretI am going to add your error to our ticket. we’re waiting for AWS to provide specific instructions for the filtering solution discussed in the forum post.#2018-10-1720:53kennyHave you guys been able to reproduce the exception?#2018-10-1720:55jaretI have not. But I am still working on it. We have 3 separate AWS accounts reporting the error when pushing. One in US-EAST-1#2018-10-1720:57kennyOk.#2018-10-1721:39okocimFWIW, I’m getting this same error in us-east-2. I feel like it’s region-specific at this point.#2018-10-1721:54jaretI just re-created on US-WEST-2. I am going to look at the other regions.#2018-10-1722:58kennyJust tried deploying my Ion code again and it appears to be working now.#2018-10-1813:28stijnI have the following code for an API Gateway web ion:#2018-10-1813:29stijn(defn ring-handler
  [req]
  (do
    (cast/dev {:msg "RequestLog" ::request req})
    (handler req)))

(def ion-handler
  "API Gateway web service for the FMS API"
  (apigw/ionize ring-handler))
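Since cast/dev output has no configured destination when running in Datomic Cloud (per the ion monitoring docs), a minimal variant of the handler above would swap in cast/event, which does reach CloudWatch. This is a sketch, not stijn's actual code; it assumes the same `handler` var and the usual ion requires:

```clojure
;; Sketch: same handler shape, but casting with cast/event so the
;; message is delivered on Datomic Cloud. Assumes `handler` is a ring
;; handler defined elsewhere.
(require '[datomic.ion.cast :as cast]
         '[datomic.ion.lambda.api-gateway :as apigw])

(defn ring-handler [req]
  (cast/event {:msg "RequestLog" ::request req})
  (handler req))

(def ion-handler (apigw/ionize ring-handler))
```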
#2018-10-1813:40stuarthalloway@U0539NJF7 use cast/event. "NOTE Configuring a destination for cast/dev when running in Datomic Cloud is currently not supported." -- https://docs.datomic.com/cloud/ions/ions-monitoring.html#2018-10-1813:43stijnok, I totally misunderstood that sentence 😄#2018-10-1813:45stijnthanks#2018-10-1813:29stijnalthough the handler generates a response, I cannot see any message with the RequestLog in Cloudwatch#2018-10-1813:30stijnshould I do something special to get these logged?#2018-10-1813:53stijnis it possible that the Content-Length header gets stripped away somewhere between API Gateway - Lambda - ionize? Because I'm definitely sending it, but it doesn't arrive on the ion function as a request header.#2018-10-1814:05jeff.terrellIs this a good place to mention broken links on the Datomic website?#2018-10-1817:17jeff.terrellWell, before I lose track of it, I'll mention it here. The first item under "Getting Started" in the FAQ [1] links here [2] and shows a "page not found" page.
[1] https://www.datomic.com/cloud-faq.html#getting-started
[2] https://docs.datomic.com/cloud/getting-started/get-datomic.html#2018-10-1814:39jeff.terrellIs it true, as this Hacker News comment states, that Datomic Free Edition does not support the client API?
https://news.ycombinator.com/item?id=16169118#2018-10-1817:14jeff.terrellAh, I finally found it, after searching for a while:
> The Datomic Free transactor is limited to 2 simultaneous peers and embedded storage and does not support Datomic Clients.
https://www.datomic.com/get-datomic.html#2018-10-1814:55eraserhdSo, what happens if we excise history from a database? Will the log have a squashed transaction? Will the log not show the history at all?#2018-10-1816:16val_waeselynckBy «excise history», do you mean "excise all retracted datoms"?#2018-10-1816:46stuarthallowayHi @U0ECYL0ET "The resulting index (and all future indexes) will no longer contain the datoms implied by the excision predicate(s). Furthermore, those same datoms will be removed from the transaction log." -- https://docs.datomic.com/on-prem/excision.html#2018-10-1913:58eraserhdThanks! AFAICT from those docs, nothing in excision allows targeting only retracted datoms. Is that an omission in the docs?#2018-10-1817:46jeff.terrellIn the solo topology of Datomic Cloud, I can still create more than one database (i.e. with d/create-database), right?#2018-10-1817:49marshall@jeff.terrell definitely#2018-10-1818:06kennyWhy am I getting this exception trying to transact some schema?
(let [conn (d/connect client {:db-name "foo"})]
  (d/transact conn {:tx-data [#:db{:valueType :db.type/instant, :cardinality :db.cardinality/one, :ident :session/last-used-on}]}))
clojure.lang.ExceptionInfo: Value of :db.install/attribute must be in :db.part/db partition, found :session/last-used-on
The db was just created and is completely empty.#2018-10-1818:07kennyI am running Datomic Cloud 441-8505 production topology and com.datomic/client-cloud {:mvn/version "0.8.63"}.#2018-10-1818:12kennyStrangely if I move :db/ident to be the first value in the map, the transaction works:
(let [conn (d/connect client {:db-name "foo"})]
  (d/transact conn {:tx-data [#:db{:ident :session/last-used-on :valueType :db.type/instant, :cardinality :db.cardinality/one}]}))
=>
{:db-before {:database-id "ae464fcb-0bc3-48f3-b3a4-3c8e9eff1d5a",
:db-name "foo",
:t 3,
:next-t 4,
:type :datomic.client/db},
:db-after {:database-id "ae464fcb-0bc3-48f3-b3a4-3c8e9eff1d5a",
:db-name "foo",
:t 4,
:next-t 5,
:type :datomic.client/db},
:tx-data [#datom[13194139533316 50 #inst"2018-10-18T18:08:00.642-00:00" 13194139533316 true]
#datom[64 10 :session/last-used-on 13194139533316 true]
#datom[64 40 25 13194139533316 true]
#datom[64 41 35 13194139533316 true]
#datom[0 13 64 13194139533316 true]],
:tempids {}}
Relying on the order of keys in a map seems like a bad practice.#2018-10-1818:12marshall@kenny put the db/ident first#2018-10-1818:12marshallit’s a bug#2018-10-1818:12marshalli’ll pass it along#2018-10-1818:13kennyYuck. Ok thanks.#2018-10-1818:36favilayeah there's some order-dependence gotchas in schema-creation#2018-10-1818:38favilaa much older manifestation of the same thing: https://gist.github.com/favila/785070fc35afb71d46c9#file-restore_datoms-clj-L123-L134#2018-10-1818:38favila#2018-10-1818:38favilalines 123-134#2018-10-1818:39favilathe "install" assertions (which are implicit nowadays) must occur after the constraints they check#2018-10-1820:05jeff.terrellI'm getting the ExceptionInfo Forbidden to read keyfile error when I (d/create-database client {:db-name "test"}). The troubleshooting page gives this solution:
> Ensure that you have sourced the right credentials with all necessary permissions.
Can somebody unpack that a little? I get the notion of AWS credentials, and I have the access key and secret for a user with all IAM permissions in ~/.aws/credentials. So presumably something else is wrong. What specifically does 'sourced the right credentials' mean?#2018-10-1820:05marshallis it in a profile in your creds file?#2018-10-1820:05jeff.terrellIt's in the default profile.#2018-10-1820:05marshallexport AWS_PROFILE=default#2018-10-1820:05marshallin your environment#2018-10-1820:06marshallor whatever os equivalent is ^#2018-10-1820:06jeff.terrellThe environment that's running the datomic-socks-proxy <stack-name> process?#2018-10-1820:06marshallusually default gets grabbed automatically#2018-10-1820:06marshallboth envs#2018-10-1820:06marshallthe one that runs the socks proxy script needs it#2018-10-1820:06marshallbut so does the env that you’re using to connect from#2018-10-1820:07jeff.terrellOK, interesting. I'm launching from cider, and I'm not sure what's in that environment. But that's enough for me to go on, thanks!#2018-10-1820:07marshallyeah, not sure how you configure system envars in cider specifically, although exporting the envars before you start emacs should do it#2018-10-1820:08jeff.terrellBy the way, I don't think I missed that anywhere in the Datomic Cloud setup instructions, nor was it listed on that 'troubleshooting' entry. That might be worth adding for people like me in the future. simple_smile#2018-10-1820:08jeff.terrellAnd/or default the AWS_PROFILE lookup to default.#2018-10-1820:09marshallhttps://docs.datomic.com/cloud/getting-started/connecting.html#access-keys#2018-10-1820:09marshallyou have several optoins#2018-10-1820:09marshalloptions#2018-10-1820:09marshallyou can pass the :profile to the client in the connection map#2018-10-1820:09marshallor you can use system-level stuff#2018-10-1820:10jeff.terrellGotcha, so I did miss that…my bad. 
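The connection-map option marshall mentions (the actual key is :creds-profile) can be sketched as a client config; this is a minimal sketch, and the :system, :region, and :endpoint values below are placeholders, not from the thread:

```clojure
;; Sketch: passing the AWS credentials profile explicitly to the Cloud
;; client instead of relying on the AWS_PROFILE environment variable.
;; Replace the placeholder system/region/endpoint with your own stack's.
(require '[datomic.client.api :as d])

(def client
  (d/client {:server-type   :cloud
             :region        "us-east-1"
             :system        "my-datomic-system"
             :creds-profile "default"
             :endpoint      "http://entry.my-datomic-system.us-east-1.datomic.net:8182/"
             :proxy-port    8182}))
```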
simple_smile#2018-10-1820:10marshallnp 🙂#2018-10-1820:24kennyThis section of the Ion docs says https://docs.datomic.com/cloud/ions/ions-tutorial.html#sec-5-3
> - Under your API in the left side of the UI, click on the bottom choice Settings.
> - Choose Add Binary Media Type, and add the */* type, then Save Changes.
Why do I need to do this? Will something not work if I skip this step?#2018-10-1908:19stijnyes, I forgot that, and body content was not properly encoded/decoded.#2018-10-1820:29jeff.terrell@marshall - Sorry, I'm still struggling here. I did export AWS_PROFILE=default and restarted my datomic-socks-proxy process. Then I did the same with my REPL. I can confirm that (System/getenv "AWS_PROFILE") returns "default", but I still get the Forbidden to read keyfile exception. (I also read the docs you linked me to, but didn't see anything amiss.) Is there something else I might be missing?#2018-10-1820:41jeff.terrellAh, figured it out. Apparently the ~/.aws/credentials file can't have comments. When I manually specified a :creds-profile value to the d/client call, that gave me a sufficiently explanatory error message to figure that out. (Or maybe the error was still on d/create-database, I don't remember now.)
The regular aws command tolerates comments in ~/.aws/credentials just fine. Is this a bug?#2018-10-1820:42jeff.terrell(Also, the regular aws command uses the default profile if AWS_PROFILE is not set, which is another way I was surprised at the behavior of Datomic, FWIW.)#2018-10-1821:14marshallDatomic uses the default credentials provider in the java SDK#2018-10-1821:14marshallit’s possible that behaves differently than version(s) of the aws CLI#2018-10-1821:15marshallDatomic never reads your ~/.aws/credentials file directly - that is always done by the AWS SDK#2018-10-1908:03stijnwe're seeing some exceptions during the compilation of an ion deploy. the instance terminates itself in this case and then we can deploy. it looks like it happens every other deploy. Has anyone else seen this?#2018-10-1909:21stijnmaybe, the answer lies in using a different http lib for making requests. the ion-event-example uses cognitect.http-client https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L27#2018-10-1909:22stijnis there some documentation about that library? and since it's not in the dependencies, I assume it is available on ions by default?#2018-10-1912:37jeroenvandijkAre session credentials supported in the dynamodb connection string? E.g.
"datomic:"
I've tried several combinations (aws_security_token, session_token, leaving it out). But no luck so far. The error I get is:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The security token included in the request is invalid.
This is telling me that AWS supports it, but I can't tell if Datomic forwards this information.#2018-10-1912:55jeroenvandijkCool, found a working thing via system properties: (defn with-system-properties [props f]
  (let [p (System/getProperties)]
    (try
      (doseq [[k v] props]
        (System/setProperty k v))
      (f)
      (finally
        (System/setProperties p)))))
(with-system-properties
  {"aws.accessKeyId" (.getAWSAccessKeyId credentials)
   "aws.secretKey" (.getAWSSecretKey credentials)
   "aws.sessionToken" (.getSessionToken credentials)}
  (d/connect uri))#2018-10-1912:41ChrisCan anyone advise if it's more performant to use Java method calls in a :where clause or Clojure? Many examples I see online use Java but it's not clear what the reason is.
e.g. [:find ?e :where [?e :person/name ?n] [(.startsWith ^String ?n "Jo")]] vs [:find ?e :where [?e :person/name ?n] [(clojure.string/starts-with? ?n "Jo")]]#2018-10-1912:43ChrisOr is it just for brevity because a function outside clojure.core needs to be fully qualified?#2018-10-1912:58jeroenvandijk@cbowdon clojure.string/starts-with? has been added in clojure 1.8 [see 1]. So these are probably old examples [1] https://clojuredocs.org/clojure.string/starts-with_q#2018-10-1913:00Chris@jeroenvandijk Ah that makes sense, thank you#2018-10-1915:30jeff.terrellIf anybody has figured out how to do isolated dev environments with Datomic Cloud in a way that you're satisfied with, I'd be interested in your thoughts here:
https://forum.datomic.com/t/any-ideas-for-isolated-dev-environments-with-datomic-cloud/663#2018-10-1915:38marshall@jeff.terrell Have you looked at https://docs.datomic.com/cloud/operation/planning.html#2018-10-1915:40jeff.terrellYes, and I'm not sure that helps, but is there something in particular you're thinking about? Query groups maybe?#2018-10-1915:42jeff.terrellAnd also, are query groups only for the production topology? I couldn't figure that out from the docs (although maybe I'm just missing it).#2018-10-1915:44marshallYes, query groups can help with some of that. Yes, production only.#2018-10-1915:46jeff.terrellOK. I'm thinking that's more than I can afford at the moment. I don't suppose y'all have any plans to include the client API in Datomic Free Edition, do you?#2018-10-1915:51jeff.terrellAlthough, now that I'm checking the price (using the estimator on AWS marketplace), it looks like it might be as little as about $1.50/day all-in for a production topology with query groups. As a sanity check, does that sound realistic to you? (Maybe it matters that I'm still in the free tier on this account?)#2018-10-1915:53marshallI think it’d be a bit more than that
IIRC a “default” production topology with 2 compute nodes runs around $400 or so a month#2018-10-1915:53marshallinfrastructure + software#2018-10-1915:54jeff.terrellOK. Thanks for the sanity check. Not sure how I was estimating that so wrongly. simple_smile Thanks for all the help, by the way.#2018-10-1915:54marshallabsolutely#2018-10-1915:56ro6I guess Ions users are hanging out here more than #ions-aws? Doesn't seem to be much activity there.#2018-10-1915:57ro6Is there a way to specify to Ions that you want a certain set of tools.deps aliases to be used when you push/deploy? I guess by default the JVM process+classpath is constructed from the top level deps.edn specification without any aliases merged in?#2018-10-1916:45grzmIn Ions, is there a way to piggyback custom code on the validate step during deploy? For example, confirming that the equivalent of -main started without error?#2018-10-1918:52ro6Second. I'm wondering about the JVM init process in general with Ions.#2018-10-1918:55ro6For example, if I want to set a global uncaught exception handler (as recommended here: https://stuartsierra.com/2015/05/27/clojure-uncaught-exceptions), which I'd normally do once in -main, what's the best place to do that in Ions?#2018-10-2014:09luchiniI’ve been using the very entry-point function to do all sorts of global system setup and, wherever possible, memoizing things along the lines of the Datomic sample app.#2018-10-2014:09luchiniI’m not sure I like this approach, so I’m monitoring whether it scales well.#2018-10-1917:32jeff.terrellI notice that the latest version of com.datomic/client-cloud on Maven is v0.8.66 [1], but that version is not listed on the releases page of Datomic Cloud [2]. Is v0.8.66 not an officially supported version? Asking because I encountered a problem with it [3] (which may not be its fault; I dunno).
[1] https://search.maven.org/artifact/com.datomic/client-cloud/0.8.66/jar
[2] https://docs.datomic.com/cloud/releases.html#current
[3] https://github.com/ComputeSoftware/datomic-client-memdb/issues/2#2018-10-1917:54preDoes Datomic support SQL Server? The documentation includes three other sql databases.#2018-10-1919:34joshkha few hours ago our applications running on AWS lost their connection to datomic cloud, as did my ability to connect locally via the socks proxy. is there an easy way to debug this? we're getting the following error:
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Total timeout elapsed",...
the docs describe this as a likely configuration error but nothing has changed locally or internal to the VPC.#2018-10-1919:37joshkh(or the following.. but i'm not sure how to recover or why it killed our silo'ed applications https://docs.datomic.com/cloud/transactions/transaction-processing.html#timeouts)#2018-10-1921:35ro6Unable to deploy: $ clojure -Adev -m datomic.ion.dev "{:op :push :uname \"jvm-init-test-1\"}"
{:command-failed "{:op :push :uname \"jvm-init-test-1\"}",
:causes
({:message "Map literal must contain an even number of forms",
:class RuntimeException})}
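Since the thread below goes on to suspect shell escaping, here is a quick illustration of what the CLI process actually receives under each quoting style (a sketch using printf as a stand-in for the clojure command; nothing Datomic-specific):

```shell
# printf '%s\n' prints each argument it receives on its own line,
# so it shows exactly how the shell split and unquoted the EDN map.

printf '%s\n' '{:op :push :uname "jvm-init-test-1"}'
# single quotes: one argument, inner double quotes preserved

printf '%s\n' "{:op :push :uname \"jvm-init-test-1\"}"
# escaped double quotes: same single argument

printf '%s\n' {:op :push :uname "jvm-init-test-1"}
# unquoted: the map is word-split into four separate arguments
# before the CLI ever sees it
```

Both quoted forms hand the tool one intact EDN string; only the unquoted form mangles it.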
#2018-10-1921:36ro6same error with $ clojure -Adev -m datomic.ion.dev '{:op :push :uname "jvm-init-test-1"}'#2018-10-1921:37Joe Lanetry without dashes in the uname OR remove the double quotes and try it as a symbol.#2018-10-1921:37Joe LaneLet me know how it goes @robert.mather.rmm#2018-10-1922:01ro6Not so good...#2018-10-1921:41ro6no dashes: "{:op :push :uname \"jvminittest1\"}" -> same fail
as keyword: {:command-failed "{:op :push :uname :jvminittest1}",
:causes
({:message "Incorrect args for :push",
:class ExceptionInfo,
:data
#:clojure.spec.alpha{:problems
({:path [:uname],
:pred datomic.ion.dev.specs/name?,
:val :jvminittest1,
:via
[:datomic.ion.dev.specs/push-args
:datomic.ion.dev.specs/uname],
:in [:uname]}),
:spec :datomic.ion.dev.specs/push-args,
:value {:op :push, :uname :jvminittest1}}})}
-> Spec fail, probably the keyword in that position doesn't satisfy datomic.ion.dev.specs/name?#2018-10-1921:42ro6Maybe it's the way my shell is escaping the string?#2018-10-1921:44ro6sorry, you said symbol#2018-10-1922:00ro6as symbol: "{:op :push :uname jvm-init-test-1}" -> same fail (odd number of forms)
as symbol without dashes: "{:op :push :uname jvminittest1}" -> same fail#2018-10-1922:04ro6@lanejo01 If I want to escalate this a bit more, do you think the dev forum or the Cognitect support case system is better?#2018-10-1922:06Joe Laneclojure -A:dev -m datomic.ion.dev '{:op :push :uname "some-uname"}'#2018-10-1922:06Joe LaneWhen I push an Ion, it looks like this#2018-10-1922:07Joe LaneNote the single quotes and how i’m not escaping the double quotes.#2018-10-1922:07Joe LaneDoes this also not work for you? Because this works for me several times per day.#2018-10-1922:09ro6yep, that was my first attempt. I think it may be a shell issue. I'm running Bash on Debian on the Windows Subsystem for Linux (WSL)#2018-10-1922:10Joe LaneWeird, because the first one you posted here contains escaped double quotes.#2018-10-1922:11ro6Yeah, I had tried a few by that time#2018-10-1922:11Joe Lanewhat if you commit a WIP then push?#2018-10-1922:17ro6ok, now it's a for real problem: $ clojure -A:dev -m datomic.ion.dev '{:op :push}'
{:command-failed "{:op :push}",
:causes
({:message "Map literal must contain an even number of forms",
:class RuntimeException})}
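As the thread later discovers, this reader error can come from any EDN the tooling reads (here it turned out to be a typo in ion-config.edn), not just the command-line map. A minimal sketch reproducing the message in plain Clojure (the ion-config contents shown are hypothetical):

```clojure
(require '[clojure.edn :as edn])

;; A map literal with an odd number of forms -- e.g. a key left without a
;; value while editing ion-config.edn -- fails in the EDN reader before any
;; spec validation of the :op/:uname arguments ever runs.
(try
  (edn/read-string "{:allow [my.app/handler] :lambdas}")
  (catch RuntimeException e
    (.getMessage e)))
;; => "Map literal must contain an even number of forms"
```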
#2018-10-1922:18ro6@lanejo01 What is your com.datomic/ion-dev version?#2018-10-1922:20Joe LaneSomething tells me the issue isn’t in the library. 0.9.176 is the version.#2018-10-1922:20ro6I'm on "0.9.176" as well, which is also the one Stu used in the event example#2018-10-1922:20ro6I agree, just covering the bases#2018-10-1922:21ro6It's weird though, because I have pushed successfully before from this shell#2018-10-1922:22Joe LaneWhat were the last 5 things you did before trying to push? Can you eval them in the repl and confirm they dont have typos?#2018-10-1922:23Joe LaneDo you have a typo in your ion-config?#2018-10-1922:23ro6haha, yes I do.#2018-10-1922:24ro6@lanejo01 Thank you sir.#2018-10-1922:25ro6Error message definitely could have pointed a bit better, but I still feel stupid...#2018-10-1922:26Joe LaneDon’t feel stupid, happens to all of us.#2018-10-2000:02ro6@stuarthalloway Looks like a typo: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L48
"/datomic-shard/" should be "/datomic-shared/"#2018-10-2201:29jaretThanks for the report, I’ve corrected the typo.#2018-10-2014:06joshkhdoes anyone have experience running through the "first upgrade" of a datomic cloud formation? our datomic cloud instance crashed yesterday, and in an attempt to revive it i've started an upgrade process, but it fails when deleting the existing stack.#2018-10-2018:40eoliphantyeah we've done quite a few at this point. we've ~10 solo's (1/dev) as well as 3-4 production sized systems and we generally apply the new revs as they come out. So far not too many problems.
@joshkh do you have the specific thing that failed? I ran into an issue once, where a delete failed because of the additional policies i'd added to the IAM role, I had to manually remove those in order for it to succeed.#2018-10-2123:26joshkhthanks for the confirmation. my road blocks were some vpcs, subnets, gateways etc. deleting the old stack left a lot of configuration hanging around, however upgrading to separate compute/storage stacks got my cloud instance back up and running.#2018-10-2117:38eoliphantok.. I have a pretty weird problem. I have some ion code that runs a query and i'm getting different behavior on the server vs running the same code locally via a repl. I've done all sorts of testing, dumped my params map to a string, edn parsed it and re-run the same code in the repl, dumped the param types to make sure that nothing weird was happening there, but so far, again, calling the exact same function (which just calls a query and flattens the result) with the exact same parameters is behaving differently on the server (incorrectly) vs the client
Ok, I think this might be a bug. I noticed I had some other code with virtually the same query, that wasn't exhibiting the same behavior. Ultimately the only difference between the two was that the one that is misbehaving was only returning a bound entity id, where the other was using it in a pull such that basically
; returns seq of ids on client, empty seq on server for exact same :where, params, etc
(d/q '[:find ?o ...
; as expected identical behavior on client and server
(d/q '[:find (pull ?o [...])...
; working around the first with something like following works fine
(->>
(d/q '[:find (pull ?o [:db/id])...
flatten
(map :db/id)
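For reference, the result shapes behind the workaround above (a hedged sketch; the attribute name follows the query in question): `:find ?o` yields tuples of entity ids, while `:find (pull ?o [:db/id])` yields tuples of maps, which is why the workaround needs flatten and (map :db/id):

```clojure
;; :find ?o                  => [[111] [222]]                   tuples of ids
;; :find (pull ?o [:db/id])  => [[{:db/id 111}] [{:db/id 222}]] tuples of maps

(->> (d/q '[:find (pull ?o [:db/id])
            :where [?o :otuabund/run _]]
          db)
     flatten          ; ({:db/id 111} {:db/id 222})
     (map :db/id))    ; (111 222)
```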
#2018-10-2317:55eoliphant@U1QJACBUM did you guys see this?#2018-10-2318:35jaret@U380J7PAQ Hey! Just saw this. I am not sure I am following you on the use of “client” and “server” here. Are you saying that you are noticing a different behavior between invoking a query on Ions and a local repl? Or, are you referring to a difference between on-prem/peer-server and client? Do you have a full gist showing what you’re running and where?#2018-10-2318:36eoliphantsorry I wasn't clear. Yes, that's exactly it. Seeing this behavior in local REPL, tunneled to server, vs on the server itself.#2018-10-2318:37jaretOk so if I write some queries as above and invoke them with query and connected directly to the stack via REPL I should see different behavior?#2018-10-2318:38jaretIf that’s the case, I’ll go reproduce and figure out what’s going on here.#2018-10-2318:38eoliphantyeah that's what I was seeing. Here's the actual full code
(defn get-all-results
  [db {:keys [run/number primer-pair/pair-id sample/extid customer/cust-id]
       :as params}]
  (cast/event {:msg "get-all-results"
               ::params [number pair-id extid cust-id]})
  (->>
   (d/q '[:find #_?o (pull ?o [:db/id])
          :in $ ?run-num ?pp ?samp-id ?cust-id
          :where
          [?r :run/number ?run-num]
          [?p :primer-pair/pair-id ?pp]
          [?s :sample/extid ?samp-id]
          [?u :customer/samples ?s]
          [?u :customer/cust-id ?cust-id]
          [?o :otuabund/run ?r]
          [?o :otuabund/primer-pair ?p]
          [?o :otuabund/sample ?s]]
        db number pair-id extid cust-id)
   flatten
   (map :db/id)))
#2018-10-2318:38eoliphantwith my 'fix'#2018-10-2318:39eoliphantit's super weird#2018-10-2318:39jaretRoger! I’ll go digging in.#2018-10-2318:39eoliphantbut uncomment the ?o comment the pull, take out the map i see the behavior#2018-10-2318:39eoliphanti ran a ton of tests#2018-10-2318:40eoliphantcreated a quick wrapper ion#2018-10-2318:40eoliphantso I could test it from the lambda console#2018-10-2318:40eoliphantetc#2018-10-2318:41eoliphantbecause it was being called from an API and what have you. so wanted to get that out of the loop. But once I got it stripped down, the pull version worked, if I pushed the one that just returned the id it failed#2018-10-2123:14joshkhtotally inconsequential because it's not used, but there's a typo in the Datomic/ion-starter example. https://github.com/Datomic/ion-starter/blame/master/src/datomic/ion/starter.clj#L97 "contect" should be "context"? I tried to create a PR but i don't think they're accepted upstream so dropping a note here. 🙂#2018-10-2201:26jaretThanks for report/catch. I’ve fixed it.#2018-10-2209:09joshkh:+1:#2018-10-2210:07mkvlrthere hasn’t been any progress on https://groups.google.com/forum/#!topic/datomic/kOBvvc228VM has there? We’d also like to vote for getting more info on the exception. We’d like to know which attribute(s) cause the db.error/unique-conflict without having to parse the string…#2018-10-2219:42joshkhany ideas? java10 on a fresh ec2 instance: Caused by: java.lang.IllegalArgumentException: Can't define method not in interfaces: recent_db, compiling:(datomic/client/impl/shared.clj:304:1)#2018-10-2220:00eoliphantI think I've seen some other weirdness mentioned with j10. We're moving all of our stuff to cloud, but still using 8 for on-prem#2018-10-2220:00eoliphantah wait is that client code ?#2018-10-2220:14joshkhyup! client code. 
my project compiles fine on my local machine (with the same jvm), but now i have a hunch that i've juggled quite a few deps via lein install and maybe can't reproduce their order on a remote machine. yikes.#2018-10-2220:17joshkhwhen in doubt, uberjar? :man-shrugging:#2018-10-2221:57donaldballIt’s my observation that d/release on a memory connection actually deletes the underlying database. This is reasonable behavior, but the documentation doesn’t suggest it’s intended. Can anyone on the datomic product team clarify by any chance?#2018-10-2315:35grzmJust throwing this out there: is anyone successfully using datomic.ions.cast/initialize-redirect with an Emacs Cider repl? I've been getting stackoverflow errors due to reflection, and before digging further would like to know if it's a known issue or something no one else has been using, or just my own jacked configuration.#2018-10-2317:57eoliphant@grzm only used it with Cursive/IDEA works fine there#2018-10-2318:14grzm@eoliphant thanks for the confirmation. A coworker has been successfully using it with Cursive as well, and I'm currently getting myself reacquainted with it specifically because of this issue.#2018-10-2322:34steveb8n@grzm those stackoverflow errors are typically seen in Ion deploys due to lack of JVM memory in Solo topologies. Sometimes it goes away with repeated deploys. Otherwise you’ll need to upgrade and edit the CF template to raise the JVM memory limit#2018-10-2403:00ro6Is anyone taking steps to recover the Java logging that Ions throws away by default (eg below WARN level)? (see: https://docs.datomic.com/cloud/ions/ions-monitoring.html#java-logging)#2018-10-2404:57steveb8n@robert.mather.rmm not yet but I’m super interested in whatever solutions anyone is using for operations/monitoring of Ions apps.#2018-10-2410:13joshkhquick ions question. are code deploys supposed to have zero downtime? i'm seeing the following behaviour: 1. deploy a revision, 2. test the lambda and it fails, 3. 
wait a little bit, try again, and it works:
; A few seconds after an Ions deploy
$ aws lambda invoke --function-name my-compute-group-testfn /dev/stdout
{
"isBase64Encoded" : false,
"statusCode" : 500,
"headers" : {},
"body" : "java.io.IOException: Premature EOS, presumed disconnect"
}{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
; And then a few more second later
$ aws lambda invoke --function-name my-compute-group-testfn /dev/stdout
{"statusCode":200,"headers":{"Content-Type":"application\/edn"},"body":"(some working result)","isBase64Encoded":true}{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
#2018-10-2420:45ro6Solo topology or production?#2018-10-2507:58stijnI'm seeing this too, both solo and production topology#2018-10-2508:00stijn(although we have only tested with production topology without HA). I can report about full production topology later. But it seems like this only happens on the first request.#2018-10-2410:33joshkhalso (unrelated) - how might one go about applying ring middleware to functions that have been ionized with datomic.ion.lambda.api-gateway/ionize? for example, i'd like to use ring.middleware.format.#2018-10-2410:52joshkhfigured out that i have to wrap each function individually before ionizing it. :+1:#2018-10-2410:41grzm@steveb8n this is a stackoverflow error in Cider, not on a deployed machine. datomic.ions.cast/initialize-redirect sends cast/event, cast/dev and cast/alert to somewhere other than Cloudwatch for local development. (https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow)#2018-10-2410:42steveb8nAh ok, my mistake#2018-10-2413:35joshkhi think there might be a bug in the ion/get-params fn. it's dropping the first letter of keys that are defined at the root level:
$ aws ssm put-parameter --name rootlevelparam --type String --value "somevalue"
{
"Version": 1
}
(ion/get-params {:path "/"})
=> {"ootlevelparam" "somevalue"}
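A hedged workaround sketch until the truncation is addressed (the path and parameter names are hypothetical): keep parameters under a non-root prefix, where the reported first-letter clipping does not seem to bite:

```clojure
(require '[datomic.ion :as ion])

;; aws ssm put-parameter --name /myapp/prod/dbname --type String --value "somevalue"
;; Reading back under a non-root :path returns the key intact; only parameters
;; living directly under "/" showed the dropped first letter reported above.
(ion/get-params {:path "/myapp/prod/"})
;; => {"dbname" "somevalue"}
```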
#2018-10-2414:29joshkhspeaking of which, should lambdas created via ions have access to SSM by default, presumably as part of their generated role? i'm getting the following error User: arn:aws:sts::accid:assumed-role/my-compute-and-region/someid is not authorized to perform: ssm:GetParametersByPath on resource: arn:aws:ssm:my-region:accid:parameter/location/here/ (Service: AWSSimpleSystemsManagement; Status Code: 400; Error Code: AccessDeniedException; Request ID: some-uuid#2018-10-2415:24amarHi. Has anyone come across this error before? Seems to happen if I am transacting using a transaction function.
java.lang.RuntimeException: Reader tag must be a symbol
File "NativeConstructorAccessorImpl.java", in sun.reflect/newInstance0
File "NativeConstructorAccessorImpl.java", line 62, in sun.reflect/newInstance
File "DelegatingConstructorAccessorImpl.java", line 45, in sun.reflect/newInstance
File "Constructor.java", line 423, in java.lang.reflect/newInstance
File "Reflector.java", line 180, in clojure.lang/invokeConstructor
File "form-init513985284846454305.clj", line 1, in user/[fn]
File "error.clj", line 135, in datomic.error/deserialize-exception
File "error.clj", line 117, in datomic.error/deserialize-exception
File "peer.clj", line 399, in datomic.peer.Connection/datomic.peer.Connection
File "connector.clj", line 169, in datomic.connector/[fn]
File "connector.clj", line 167, in datomic.connector/[fn]
File "MultiFn.java", line 233, in clojure.lang/invoke
File "connector.clj", line 194, in datomic.connector/[fn]
File "connector.clj", line 189, in datomic.connector/[fn]
File "connector.clj", line 187, in datomic.connector/[fn]
File "core.clj", line 2022, in clojure.core/[fn]
(f))
File "AFn.java", line 18, in clojure.lang/call
File "FutureTask.java", line 266, in java.util.concurrent/run
File "ThreadPoolExecutor.java", line 1149, in java.util.concurrent/runWorker
File "ThreadPoolExecutor.java", line 624, in java.util.concurrent/run
File "Thread.java", line 748, in java.lang/run
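For posterity, a hedged note on what is going on in this trace: code containing {:person/keys [...]} destructuring serializes as a #:person{...} namespaced-map literal (Clojure 1.9 syntax), and readers predating that syntax fail with "Reader tag must be a symbol". A sketch, with hypothetical data:

```clojure
;; With *print-namespace-maps* bound to true (the REPL default), a map whose
;; keys share a namespace prints using the #:ns{...} literal:
(binding [*print-namespace-maps* true]
  (pr-str '{:person/keys [age name]}))
;; => "#:person{:keys [age name]}"

;; An old tools.reader that doesn't know the #:person literal treats it as an
;; unknown reader tag. Avoiding the destructuring sugar keeps the serialized
;; form free of the literal:
;; (let [age (:person/age data) name (:person/name data)] ,,,)
```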
#2018-10-2421:31amarFor posterity, it seems the issue was related to using destructuring with namespaced keys. The transaction function had something like
(let [{:person/keys [age name]} data] ,,,)
which gets stored in datomic as
(let[#:person{:keys[age name]} data] ,,,)
changing to
(let [age (:person/age data) name (:person/name data)] ,,,)
was one fix. The root cause was an old version of tools.reader and/or another dependency. Upgrading dependencies, independently of the code change, also resolved the issue.#2018-10-2418:16kennyI see the latest version of client-cloud is 0.8.66 on Maven (https://search.maven.org/search?q=g:com.datomic%20AND%20a:client-cloud&core=gav), but the Datomic releases page (https://docs.datomic.com/cloud/releases.html#current) says the current release of client-cloud is 0.8.63. Which is correct?#2018-10-2418:20jaret@U083D6HK9 the release page is correct. I’ll confirm with the team if we need to update to .66#2018-10-2419:33csmSo I created my first ion to hook up to API gateway, and everything works, except my HTTP response bodies are turning into base-64. Is there an obvious thing I messed up that would cause that?
(d/transact conn {:tx-data [{:model/name "name"
                             :model/rules (conj past-rules {:rule/name "new-rule"})}]})
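As the replies note, map syntax is sugar for :db/add, so conj-ing over the past rules is unnecessary. A hedged sketch, with hypothetical entity ids:

```clojure
;; Asserting on a cardinality-many ref only ever adds; it never replaces.
;; Explicit datom form, one assertion per new rule:
(d/transact conn {:tx-data [[:db/add model-id :model/rules new-rule-id]]})

;; Equivalent map form -- no need to conj onto past-rules, because map
;; syntax expands only to :db/add, never to :db/retract:
(d/transact conn {:tx-data [{:db/id model-id
                             :model/rules [{:rule/name "new-rule"}]}]})
```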
#2018-10-2500:41favilaThe map syntax is sugar for [:db/add ...]#2018-10-2500:42favilaIt’s not a “merge” or “reset”#2018-10-2500:42favilaThe conj is therefore unnecessary#2018-10-2501:01pvillegas12How would it be with :db/add?#2018-10-2501:04pvillegas12@U09R86PA4 would it be something like [db/add model-id :model/rules rule-id]?#2018-10-2501:05favilaYes#2018-10-2501:05favilaYou can still use the map syntax just understand what it is doing#2018-10-2501:07favila{:db/id a :many-ref [b]} expands to a single db/add, never ever any db/retract#2018-10-2501:07favilaThere is no map syntax for retraction#2018-10-2501:20pvillegas12Yeah, thanks! My conceptual problem was the ability to add a datom to a many-ref with a single datom on the many side#2018-10-2420:43matthaveneranyone used this library? https://github.com/RallySoftware/datomic-replication#2018-10-2421:32csmcan I have multiple ion “projects” per compute stack, or am I limited to one? That is, only one resources/datomic/ion-config.edn? I understand I can split things up into deps, I just want to understand the deployment strategy#2018-10-2600:54eoliphantAFAIK query groups are the only 'unit' of separation in a given system#2018-10-2422:00csmalso, I frequently get 502 errors on what looks like lambda cold starts (https://forum.datomic.com/t/api-gateway-internal-server-error/678)#2018-10-2600:56eoliphantwe've had some success by just pointing CW scheduled events at them#2018-10-2422:07kenny@csm I have also ran into that. I don't have a solution.#2018-10-2423:33luchiniWe are facing a fascinating problem with Datomic Ions. As our code base grew during the last couple of weeks, deploying to Datomic Ions started failing (CodeDeploy itself gives up and rolls back a previous version to the instance).
We initially thought it could be a problem in our setup (we were still on 441) so we updated to 441-8505 but the same behavior kept plaguing us.
After spending a considerable amount of time investigating, we found two ways to “solve” the issue but neither seems reasonable enough:
1) Hack Datomic’s EC2 instances and bump the stack size up of the JVM process
2) Keep the code base much simpler than we would need to 🙂#2018-10-2423:33luchiniNow that I come to think of it, it’s more like one solution 😄#2018-10-2423:35luchiniThe indication that increasing the stack would work was that the exception thrown by starting up datomic.cluster-node was a stack overflow.#2018-10-2423:36luchiniI wonder about the reasoning behind keeping the stack at 256K. I’m pretty sure @marshall has a superb reason for it.#2018-10-2423:37marshalli have no such thing 😉#2018-10-2423:37marshallWe’re aware of the stack size issue#2018-10-2423:37marshallit was configured the way it was to make solo run well on a very small instance#2018-10-2423:38marshallwe’re working on general options all around, but editing the CFT to set the stack size higher is a reasonable option for now#2018-10-2423:38luchiniWe’ve got production topology and dedicated query groups… 🙂#2018-12-0420:02grzmWhen using the Datomic cloud client, (d/db ,,,) returns a :database-id value as well as the db-name. This :database-id value isn’t returned by (d/db ,,,) when running in the cloud AFAICT. Can someone confirm this? I’m looking to get a globally unique, consistent value for the database across connections. Alternative ideas welcome.#2018-12-0420:08eraserhdYou mean, two databases are equal when they have equivalent facts?#2018-12-0420:09eraserhdI've used db values themselves as keys, although there could be two equivalent databases.#2018-12-0420:10eraserhdAnd just in case, since I'm trying to get my company to open source this library, what are you doing?#2018-12-0420:12grzmNope, making sure I’m connected to the same database I thought I was before.
We churn through databases (and Datomic Cloud stacks, for that matter) and I want to be able to confirm I’m connected to the same database I was before.#2018-12-0421:15matthavener@andreas862 we namespace all the ui artifact keys with :ui/, and then we just use clojure.walk to remove all the keys that (= "ui" (namespace k))#2018-12-0510:37Andreas LiljeqvistThanks, I think that will be the simplest solution#2018-12-0421:16matthavenerfwiw, you could also build a list of all idents the db knows about and and then walk the transaction data to remove anything invalid#2018-12-0511:36hanswhi all#2018-12-0511:39hanswI have a query that looks like this:
[:find (pull ?e [*])
:in $ [?veh ...]
:where
[?e :vehicle/plate ?veh]]
This returns one value, where multiple rows are squashed together into one map, which I don't understand.#2018-12-0511:44souenzzo:find [(pull ?e [*]) ...] works?#2018-12-0511:45hanswyeah#2018-12-0511:45hanswbut it's squashing multiple rows into one map#2018-12-0511:46hanswso the result shape is [[{}]]#2018-12-0511:46hanswwhere there are multiple rows inside the {}#2018-12-0511:47hansweg.
{:vehicle/a "foo" :vehicle/b "bar" :vehicle/a "more" :vehicle/b "other"}#2018-12-0511:48hanswso I can't distinguish 'rows' from each other#2018-12-0511:50hanswWhat I need is:
[{:vehicle/a "foo" :vehicle/b "bar"}
 {:vehicle/a "more" :vehicle/b "other"}]
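For reference, the two find specs discussed here differ only in how the rows are nested (a hedged sketch with hypothetical plates):

```clojure
;; Relation find spec: one tuple per row.
(d/q '[:find (pull ?e [*])
       :in $ [?veh ...]
       :where [?e :vehicle/plate ?veh]]
     db ["AAA-111" "BBB-222"])
;; => [[{:vehicle/plate "AAA-111" ,,,}] [{:vehicle/plate "BBB-222" ,,,}]]

;; Collection find spec: the same maps without the tuple wrapper.
(d/q '[:find [(pull ?e [*]) ...]
       :in $ [?veh ...]
       :where [?e :vehicle/plate ?veh]]
     db ["AAA-111" "BBB-222"])
;; => [{:vehicle/plate "AAA-111" ,,,} {:vehicle/plate "BBB-222" ,,,}]
```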
#2018-12-0511:53hanswi think it broke after upgrading my datomic client...#2018-12-0511:53souenzzowell. let's wait someone from datomic team 🙂#2018-12-0513:51hanswmaybe there were subtle changes when the client-api changed from datomic.client to datomic.client.api#2018-12-0513:52hanswin any case the q function changed from requiring a conn parameter#2018-12-0513:53hanswto just the argmap#2018-12-0513:57thegeezCould you paste the output you are seeing? Your first example is not valid clojure with the same :vehicle/a key appearing multiple times in the same map#2018-12-0513:58hanswi made it shorter for brevity sake#2018-12-0513:58hanswone moment#2018-12-0513:59hanswooh wait yes#2018-12-0513:59hanswhaha#2018-12-0514:01hanswit seems that i was in desperate need of coffee when i pasted this#2018-12-0514:16hanswok, this was another case of confessional debugging.#2018-12-0514:22thegeezRemote rubber ducking works 🙂#2018-12-0514:26hanswthnx for listening 🙂#2018-12-0516:08PBHey there. Last night I was trying to stand up a pedestal service on IONS.
I have been following: https://github.com/pedestal/pedestal-ions-sample
However, when trying to deploy, as such:
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group "ion-pet-Compute-1Q6752A2P837M", :uname "pet-service-sample"}'
{:execution-arn
arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1543987389875,
:status-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy-status, :execution-arn arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1543987389875}'",
:doc
"To check the status of your deployment, issue the :status-command."}
I get the following
clojure -Adev -m datomic.ion.dev '{:op :deploy-status, :execution-arn arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1543987389875}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
Looking at the logs it's clear that I have given the wrong :deployment-group:
2018-12-05T05:23:20.008Z 059ddc53-7801-4f4c-b688-60918314b781 DeploymentGroupDoesNotExistException: No Deployment Group found for name: ion-pet-Compute-1Q6752A2P837M
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:48:27)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
My question is where do I find that? I copied it directly from the cloudformation name. Is that incorrect?#2018-12-0516:14Joe LaneWhat version of clojure are you using?#2018-12-0516:15PB1.9.0#2018-12-0516:15Joe LaneAlso, you may have deployed the datomic cloud system correctly, but can you confirm it by connecting to that system before this push?#2018-12-0516:16PBI believe I have deployed it correctly. I verified that by testing following the getting started docs#2018-12-0516:18Joe LaneWhen I deploy something I don't have quotes around the group.
clojure -Adev -m datomic.ion.dev '{:op :deploy, :group myproject-dev-compute, :rev "2fee444890ccf58d4629294f3904bf1c38bb762q"}'#2018-12-0516:19Joe LaneI’ve gotta run but I hope that's helpful.#2018-12-0516:24PBThanks for the input. My group name needs to include the random identifier, as in ion-pet-Compute-1Q6752A2P837M, or it can't find the group and will not do anything#2018-12-0516:25PBI still get the error: 2018-12-05T05:23:20.008Z 059ddc53-7801-4f4c-b688-60918314b781 DeploymentGroupDoesNotExistException: No Deployment Group found for name: ion-pet-Compute-1Q6752A2P837M#2018-12-0516:25PBCan anyone tell me how to find my deployment group? The docs are pretty bad around this. https://docs.datomic.com/cloud/ions/ions-reference.html#deploy#2018-12-0516:26PBBeing that there is no :deployment-group I used $(SystemName)-Compute-$(GeneratedId) but the docs suggest that that is for group#2018-12-0517:09jaret@petr to deploy you replace $(GROUP) with the name of your compute stack. are you sure the C is capitalized in “Compute”? You can see the name in the CF console https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks?filter=active#2018-12-0517:12PBThe C is definitely capitalized. I copied the name of the stack from the UI in CloudFormation#2018-12-0517:13PBThis is my last failed attempt:
clojure -Adev -m datomic.ion.dev '{:op :deploy, :rev "674e654d429ebc5092be8008b1463617720a1a7c", :uname "pet-service-sample", :group "ion-pet-Compute-1Q6752A2P837M", :region "us-east-2"}'#2018-12-0517:25PBIf i omit the random identifier, it doesn't even accept the request#2018-12-0517:25jaret@petr what is your output from your Push command?#2018-12-0517:26jaretit will have a :deploy-command that is generated for you#2018-12-0517:26PBclojure -A:dev -m datomic.ion.dev '{:op :push :region "us-east-2" :uname "pet-service-sample"}'
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from
(cognitect.s3-libs.s3/upload "datomic-code-8fe8a54a-0daf-48b9-a4b8-bbdf996b81ae" [{:local-zip "target/datomic/apps/ion-pet-service/unrepro/pet-service-sample.zip", :s3-zip "datomic/apps/ion-pet-service/unrepro/pet-service-sample.zip"}] {:op :push, :profile "devthenet", :region "us-east-2", :uname "pet-service-sample"})
{:uname "pet-service-sample",
:region "us-east-2",
:deploy-groups (),
:dependency-conflicts
{:deps
{commons-codec/commons-codec #:mvn{:version "1.10"},
org.clojure/tools.analyzer.jvm #:mvn{:version "0.7.0"},
com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.5"},
org.clojure/tools.reader #:mvn{:version "1.0.0-beta4"},
org.clojure/core.async #:mvn{:version "0.3.442"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group <group>, :uname \"pet-service-sample\", :region \"us-east-2\"}'",
:doc
"To deploy, issue the :deploy-command, replacing <group> with a group from :deploy-groups"}
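One detail worth flagging in the push output above: `:deploy-groups` came back as `()`, meaning the push found no deployment group to offer, which lines up with the failures that follow. A hedged sketch for finding the group name from the command line instead of the console (assumes the AWS CLI is configured; the stack name and region are the ones appearing in this thread):

```shell
# Read the CodeDeployDeploymentGroup output straight off the compute stack
aws cloudformation describe-stacks \
  --stack-name ion-pet-Compute-1Q6752A2P837M \
  --region us-east-2 \
  --query "Stacks[0].Outputs[?OutputKey=='CodeDeployDeploymentGroup'].OutputValue" \
  --output text

# Cross-check which CodeDeploy applications actually exist in the account
aws deploy list-applications --region us-east-2
```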
#2018-12-0517:27PBclojure -Adev -m datomic.ion.dev '{:op :deploy, :rev "674e654d429ebc5092be8008b1463617720a1a7c", :uname "pet-service-sample", :group "ion-pet-Compute-1Q6752A2P837M", :region "us-east-2"}'
Is what I got when I took the deploy-command and added the group#2018-12-0517:29jaretwhat's in your ion-config.edn?#2018-12-0517:32jaretAlso, @petr what version of Cloud are you running?#2018-12-0517:32PB{:allow [ion-sample.ion/app]
:lambdas {:app {:fn ion-sample.ion/app :description "Exploring Ions with Pedestal"}}
:app-name "ion-pet"}
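Since the root cause in this thread eventually turns out to be an uncommitted change to ion-config.edn, a cheap sanity check before every `:push` is to ask git whether the file is actually committed. The `resources/datomic/` path below is an assumption based on the ion sample layout; adjust it to wherever the file lives in your project:

```shell
# Any output from --short means uncommitted changes that :push won't see
git status --short resources/datomic/ion-config.edn

# Show the last commit that touched the file, to confirm it's the one you expect
git log -1 --oneline -- resources/datomic/ion-config.edn
```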
#2018-12-0517:32PBI deployed it yesterday, so the latest version#2018-12-0517:35PBSo I realised that I hadn't committed that change. So I committed, pushed and deployed. It's still failing, but this time cloudwatch doesn't give anything useful
17:31:29
START RequestId: cdd511ba-86b4-4242-9abb-e587c536b404 Version: $LATEST
17:31:29
2018-12-05T17:31:29.248Z cdd511ba-86b4-4242-9abb-e587c536b404 { event: { codeDeploy: { deployment: [Object] }, lambda: { cI: 0, c: [Object], uI: -1, u: [], dI: -1, d: [], common: [Object] } } }
17:31:30
END RequestId: cdd511ba-86b4-4242-9abb-e587c536b404
17:31:30
REPORT RequestId: cdd511ba-86b4-4242-9abb-e587c536b404 Duration: 1271.95 ms Billed Duration: 1300 ms Memory Size: 128 MB Max Memory Used: 38 MB
No newer events found at the moment. Retry.
#2018-12-0517:39PBclojure -Adev -m datomic.ion.dev '{:op :deploy-status, :execution-arn arn:aws:states:us-east-2:272695641059:execution:datomic-ion-pet-Compute-1Q6752A2P837M:ion-pet-Compute-1Q6752A2P837M-pet-service-sample-1544031395342, :region "us-east-2"}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
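When the status command only reports "FAILED" with no detail, CodeDeploy itself usually knows why. A hedged CLI sketch for pulling the reason out (assumes the AWS CLI is configured; `d-EXAMPLE` is a placeholder for an id returned by the first call):

```shell
# Most recent CodeDeploy deployments in the account
aws deploy list-deployments --region us-east-2

# Ask a specific failed deployment for its error information
aws deploy get-deployment \
  --deployment-id d-EXAMPLE \
  --region us-east-2 \
  --query "deploymentInfo.errorInformation"
```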
#2018-12-0517:39marshall@petr can you try putting in the region explicitly#2018-12-0517:39marshallas a :region key#2018-12-0517:39marshallin the deploy map#2018-12-0517:39marshalloh, you did#2018-12-0517:40marshallhrm#2018-12-0517:41jaretI think you’re on the right track though. This has to be a permissions/creds or region thing. I just tested on a new stack and my deploy command was populated with the compute stack as the $group#2018-12-0517:41marshallwell, actually, did you provide an “application name” in your CFT when you launched?#2018-12-0517:41marshallif you did, the group name is that, not your compute group name#2018-12-0517:41PBI am an admin on this account. I did provide an application name#2018-12-0517:42PBion-pet#2018-12-0517:42marshalllook in the outputs of the stack in the CF dashboard#2018-12-0517:42marshallsorry, by that i mean the app name#2018-12-0517:43PBSystemName ion-pet System Name#2018-12-0517:43jaret#2018-12-0517:43marshallcan you look in your cloudformation dashboard#2018-12-0517:43marshallfind the stack you launched (the compute stack)#2018-12-0517:43marshalland in outputs find CodeDeployDeploymentGroup#2018-12-0517:43PBCodeDeployDeploymentGroup ion-pet-Compute-1Q6752A2P837M CodeDeploy Deployment Group#2018-12-0517:44jaretwhat do you have under “AvailabilityZone1”?#2018-12-0517:44PBAvailabilityZone1 us-east-2b AvailabilityZone1#2018-12-0517:45marshallyou have a :rev and a :uname -> i don't think that would cause this, but i would expect you only to have one or the other#2018-12-0517:46PBThe example I was following specified one:
To push the project to your Datomic Cloud environment, execute the
following command from the root directory of the sample project:
`clojure -A:dev -m datomic.ion.dev '{:op :push :uname "pet-service-sample"}'`
We provide a `:uname` key because the sample has a `:local/root` dependency.
This command will return a map containing the key
`:deploy-command`. Copy the value and execute it at the command line
to deploy the ionized app. You will need to unescape the `:uname` value.#2018-12-0517:47marshallah right, the local dep#2018-12-0517:47marshalln/m#2018-12-0517:47marshallhttps://console.aws.amazon.com/codesuite/codedeploy/home?#2018-12-0517:47marshall^ go there#2018-12-0517:47marshallyou should be able to look at your list of codedeploy groups#2018-12-0517:48PBI can see 2 attempted deployments#2018-12-0517:48PBCould it be that the example code does not work and it's failing because it's unable to start the app?#2018-12-0517:49marshallnot if the error is about not finding the group#2018-12-0517:49marshallif that’s not the error, then yes#2018-12-0517:49marshallif you click on the latest failed deployment you can look at the reported cause#2018-12-0517:50PBSo once I committed and pushed the change to ion-config.edn, the deployment group error went away#2018-12-0517:50jaretah!#2018-12-0517:50PBBut now it just tells me that the deployment failed#2018-12-0517:50jaretyeah, then that means the code is failing/unable to start the app#2018-12-0517:51PBOh man, that sucks#2018-12-0517:51jaretBut hey! We found the deployment group 🙂#2018-12-0517:51jaretIt was right where you said it was 🙂#2018-12-0517:51PBYeah! Thanks for that!#2018-12-0517:52PBSo do you think it's that https://github.com/pedestal/pedestal-ions-sample is just not compatible with the new cloud or there is just a mistake somewhere?#2018-12-0517:55marshallDo you see an error in your cloudwatch logs#2018-12-0517:57PBI do not#2018-12-0517:58PBAt least not in the compute#2018-12-0518:05marshalllatest log group? the redeploy usually creates a new log group within the stream#2018-12-0518:08jaret@petr these docs show an example of navigating the log group for exceptions#2018-12-0518:08jarethttps://docs.datomic.com/cloud/troubleshooting.html#http-500#2018-12-0518:08PBThanks!#2018-12-0518:09PBI think I'm going to try to redeploy with the exact name of the system used in the example (later today).
I'll report back#2018-12-0519:44lwhortonam i right in thinking that not null requirements on a standard sql column are a recipe for never being able to evolve your schema into the future? if i had foo varchar not null and a year later we deprecate foo to instead use bar, i don't want my application layer to have to carry the baggage of “filling in” old foos as well as bars?#2018-12-0519:47lwhortoni just want to leave the app code alone which deals with foos, and everything new uses bars instead.#2018-12-0519:51eraserhdPostgres, at least, allows efficient dropping of a "not null" constraint.#2018-12-0520:00lwhortonis it more useful to have nothing marked not null than to pick and choose and in the future drop it?#2018-12-0520:00lwhortonor am i being silly by leaving everything nilable?#2018-12-0520:02eraserhdIt depends?#2018-12-0520:05eraserhdIn practice, I've seen a lot of Java code that has to check if everything it touches is null. This comes from not having a coherent idea throughout the system of what null means (error? not yet filled in?). This is wasteful and super painful. The "not null" constraint on fields might be useful to communicate to other devs, who cannot push code which violates contracts. That said, there are other ways to validate contracts, and which is best depends on the domain.#2018-12-0520:05eraserhdAnd if you don't have other developers, it's moot.
Unless you have an old, dodgy brain like me.#2018-12-0520:06lwhortonhaha, 👍#2018-12-0520:10Alex Miller (Clojure team)Datomic (and Clojure) strongly embrace the idea that you should say things that you know and omit things that you don’t#2018-12-0520:10Alex Miller (Clojure team)or omit stating the absence of a thing, if that makes sense#2018-12-0520:11Alex Miller (Clojure team)so, I do not think it is silly#2018-12-0520:11lwhortoni just watched rich’s last talk “maybe not” and was trying to apply it to this case where i don't have datomic but want datomic in postgres#2018-12-0520:11Alex Miller (Clojure team)but as with anything, it depends :)#2018-12-0520:13lwhortoni do like the idea of nil means ‘i don't know’ not ‘empty to satisfy a constraint’#2018-12-0520:14Alex Miller (Clojure team)if you don’t know, then why say anything at all? :)#2018-12-0520:17lwhortonwell, i guess to be more clear (arghh the english language): i like the idea of nil meaning “look person, i don't even know what you’re talking about…“, which currently can't be represented in postgres (unlike datomic where you can simply omit something). and maybe the best way to do that is simply nil the field#2018-12-0520:17eraserhdYou could always make one Postgres table called "facts" with "e" "a" and "v" columns .... (jk, don't do this)#2018-12-0520:18lwhortonbetween triggers, log tables, and a reified transactions table with some cleverness we’re getting close to about 25% of the power of datomic#2018-12-0520:18ro6Somewhere I read that the Drupal CMS basically does that, never confirmed it though.
I'm seriously considering stripping out nulls as part of my JDBC queries as a step towards migrating to Datomic and to make the Clojure code a little more consistent (though this may be a terrible idea).#2018-12-0520:24lwhortonthanks for the insight @shaun-mahood. i’m working in an elixir system and also very much considering modifying the db connector to auto-strip nulls too. as alex said, if we don't know it why bother even declaring we don't know it.#2018-12-0613:49Dustin GetzHistory modeling question. Say you have :post/url-slug :keyword :identity. And when you change the slug, you want to prevent breakage by remembering the old slugs. Is slug :cardinality :many, or should we use history for this?#2018-12-0617:11val_waeselynckI'll keep shouting it: don't use history to implement your application logic! https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html 😛#2018-12-0617:11val_waeselynckMaybe have 2 attributes, one which is the current slug, another which keeps track of past slugs?#2018-12-0617:16benoitAgree with @U06GS6P1N. I would store multiple slugs in this case, especially if you don't care which one is the last one and only use them to resolve URLs. But if I could take a step back I would try to put an immutable identifier in this URL for resolution.#2018-12-0617:38lwhortonthinking about your post there @U06GS6P1N… if you’re concerned about the earth-time that something actually happened (and you probably should be) would it not be better for pretty much every entity to have a created-at in addition to the automatic tx instant?#2018-12-0617:39lwhortonthat would enable clients or really any other process (for example, offline-mode clients) to hold onto the when and not conflate the when-it-actually-happened with when-i-learned-about-it#2018-12-0617:40lwhortonthough i suppose it depends on the needs of your application.
how important the “event time” is compared to the “recording time” also seems like a domain concern#2018-12-0621:01Dustin GetzIt is not clear to me that preventing breakage of public identities over time should be considered application logic#2018-12-0621:23benoitI think val's "don't use history to implement your application logic" is a shortcut. Sometimes it makes sense to use history for application logic when real-world time = database time. In your case, I'm guessing that this slug started to exist when it was created in the database so the two times coincide. So it would be correct to use history for that. I think it all depends on whether you want the :url/slug attribute to mean "the last slug for this resource and the one to use when publishing this URL" or "all the slugs that redirect to this URL".
Another thing to consider is that if you use those slugs to identify resources you might want to ensure uniqueness. You might not want to look at the whole history when you create a new slug to detect collisions. A cardinality many with an "identity" flag seems easier.#2018-12-0621:29lwhortonwhen real-world time = database time seems to me to affect two scenarios: any sort of ‘offline mode’ feature, and any time you have a queueing system to process heavy load, where ‘real world event time’ != ‘database time’ (by a significant enough margin to matter)#2018-12-0621:54benoit@U0W0JDY4C Not sure I understand what you mean. The difference between the domain time and the database time is more general I think. If you record a fact that person P joined company C at a certain date then it is "domain time" and datomic history will not help you with that. But val's article showed that even if what you model are entities that could coincide with database time (what happens to a blog post is what gets recorded to the database), it is still not a good idea to rely on the history functions to implement features.#2018-12-0621:55lwhortonyes, sorry for the confusion. we are on the same page-- datomic doesn’t magically handle time related specifically to domains, and if you need domain time it’s important to model that explicitly.#2018-12-0616:14marshallANN:
If you are running Datomic Cloud Production topology and are using a VPC Endpoint (as detailed here: https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint), we are considering improvements that impact this service and would like to hear from you.
Please email us or respond on the forums with a short description of your use case for the VPC Endpoint.
https://forum.datomic.com/t/requesting-feedback-on-vpc-endpoint-use/721#2018-12-0621:15kennyIf I delete a large Datomic DB (10-50m datoms), Datomic's DDB read provisioned spikes to 25 and read actual to 250 for a while. Is there a reason for this?#2018-12-0621:18kennyFurther, shouldn't Datomic have auto scaled the DDB reads up to at least 100?#2018-12-0621:19kennyRead actual stayed up at 250 for about 15 mins.#2018-12-0717:09SalI was reading datomic’s tutorial on transacting schemas referenced here: https://docs.datomic.com/on-prem/getting-started/transact-schema.html#2018-12-0717:11SalBut I notice in other sites or examples, there are two additional attributes specified that are not in the tutorial: :db.install/_attribute and :db/id#2018-12-0717:12SalIs there a reason those attributes are not presented in the tutorial?#2018-12-0717:12favila@fuentesjr those used to be required, but now are not#2018-12-0717:12favilathe other examples you are seeing are older#2018-12-0717:13Salare those attributes no longer necessary, or do they simply have default values that can be overridden?#2018-12-0717:14favilathe attributes are there, you just do not have to supply them#2018-12-0717:14favilain every case db id should be tempid in :db.part/db partition and :db.install/_attribute should be :db.part/db#2018-12-0717:15favilathey just went from requiring them for installation to inferring them#2018-12-0717:15favila(inferred from the presence of :db/type and :db/cardinality probably)#2018-12-0717:16favilawhat those did was add the new attribute to the :db.part/db entity (entity id 0) on its :db.install/attribute attribute#2018-12-0717:16favilanow that happens without you asking for it#2018-12-0717:17favilahttps://docs.datomic.com/on-prem/changes.html#0.9.5530#2018-12-0717:18favilareleased 2016-11-28#2018-12-0717:24Salso by default now all schemas are created in the db.part/user partition when those attributes are not specified?#2018-12-0717:25SalAlso, are the entity ids unique across
all of the db or are they unique only within the partition in which they reside?#2018-12-0717:26ro6@fuentesjr I think so. It seems like partition control in general has fallen out of common practice. I'm not sure it's even possible in Datomic Cloud. I think I read somewhere that they found it wasn't being used much and may not be "worth its weight" in the API.#2018-12-0717:27ro6Not sure about your second question.#2018-12-0717:28favilaI'm pretty sure schema is still put in the db.part/db partition#2018-12-0717:29Salinteresting#2018-12-0717:29favilathis was mostly an interface change (you no longer need to include those assertions in your tx data) not a functionality change#2018-12-0717:29favilaif you look at the datoms added after the tx I am pretty sure they are the same#2018-12-0717:30ro6My understanding is that partitions are primarily a means of improving performance by controlling which of your datoms get indexed/cached "closer" to each other. I'm not sure if they have other functional/semantic implications.#2018-12-0717:30favilathey don't, but schema is supposed to go in db partition and I don't think they changed that#2018-12-0717:31favilapractically speaking this just means that the entity id of attributes will be smaller#2018-12-0717:32SalI see. So by looking at the presence of :db/type and/or :db/cardinality, they store those datoms in db.part/db otherwise … they store them in db.part/user
I'm getting a java.util.concurrent.ExecutionException containing an ExceptionInfo with {:db/error :db.error/cas-failed}. Is that how to detect CAS failures now?#2018-12-0809:14jwkoelewijndoes anyone have some tips regarding troubleshooting a hanging Push with datomic-ions? The last thing i see is Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from and then it hangs while having 99% cpu usage#2018-12-0809:19tomjackhmm, I was just struggling with the same symptom#2018-12-0809:19tomjackhttps://dev.clojure.org/jira/browse/TDEPS-79#2018-12-0809:19tomjackbut unrelated to datomic#2018-12-0810:18jwkoelewijnhmmm, interesting, will have a look, thanks#2018-12-0909:13jwkoelewijnI seem to have found the culprit: for some reason I had "target" in my :paths section. Removing this removed the hang and enabled me to push and deploy#2018-12-0904:07steveb8nSeeking your opinion: after following the Ions tutorials where each fn is its own lambda, then trying out a single request handler fn/Ion, the single Ion seems much better. The main reason is cold starts: with a single Ion, there are a lot fewer cold starts for users. It means using less of the API Gateway machinery but this is actually a good thing if you want a local dev server. So that's two compelling reasons to use a single entry-point. What am I missing in this assessment?#2018-12-1017:46ro6That's been my conclusion so far as well. There may be tasks around long-term API maintenance that the Gateway features help with, but I haven't reached that problem yet.#2018-12-1017:50Joe LaneSecurity through cognito is certainly one use case. It would allow you to disentangle biz logic from the auth(z) code.#2018-12-1017:51Joe LaneIf you can isolate all that stuff at the boundary it can simplify quite a lot of stuff.
But that's kind of a design+biz tradeoff for whether you want to separate auth(z) from biz code.#2018-12-1017:52Joe LaneOn the one hand you could trust that the functions are only run by properly authorized roles if you have a mechanism to ensure all function invocations are piped through cognito.#2018-12-1017:53Joe LaneOn the other hand, what's the consequence for getting it wrong because of a typo if you decouple them? Does a user in a game get to do something they shouldn't? nbd. Does your firm have a catastrophic HIPAA violation? Company ends with lawsuits burning it to the ground.#2018-12-1017:54ro6in that case, perhaps you'd complect on purpose#2018-12-1018:55joshkhfor what it's worth i started by deploying many "atomic" functions behind API Gateway routes, and then eventually folded them into one proxy resource to avoid cold starts. my reasoning was that some end points are very important but not used often, and the cold start of those end points resulted in a poor user experience.#2018-12-1018:58joshkhi think the Ions tutorial leaves readers in a funny place - on one hand Ions advertises itself as atomic functions in the cloud, yet the tutorial steers readers to internal routing without demonstrating how to do so. you're left to choose one path or the other without knowing the consequences.#2018-12-1020:54steveb8nThanks for the thoughts. Re Cognito, I am using it already and I learned that with 1 interceptor I can replicate the checks that are done by API-GW. However I had to make an extra AWS call because the Cognito ID token doesn't contain the roles but it's the one used for decorating requests. Instead the auth handler needs to extract the role from the Access Token i.e. a bit of extra complexity at Auth time. Not a high price to pay. At this point I'm pretty much ready to not use Cognito roles and implement it myself because the local dev server can use that as well#2018-12-1108:34stijnI'll give my 2 cents: we have even given up on API GW as a proxy.
There's 2 reasons.#2018-12-1108:36stijn1/ if you call the datomic lambda and that fails (e.g. after an ion deploy it happens frequently), you'll get back an internal server error, but api gw doesn't let you change the response on proxy methods. we would like to add some headers for CORS and set the response to e.g. 503, because a retry makes sense in these cases. you could solve that by adding another lambda in front i guess#2018-12-1108:40stijn2/ if you have large requests (> 6MB = the lambda limit), you have to find another way to get your data in/out. if you go the serverless way that would mean something like using presigned S3 urls for both upload and download. Also the max timeout for api gateway is 30s. maybe we are misusing all this, but file uploads / downloads are kind of crucial to our application#2018-12-1108:41stijnif you don't have any of these requirements I think api gateway is good, but i'd still use it with 1 proxy endpoint, 1 lambda and do the routing in the ion.#2018-12-0918:49bbloomin the context of datomic, what patterns do people tend to use to deal with “unknown values” (ie missing datums) and “known unknown values” (ie explicit nils) given that datomic doesn’t support the latter?#2018-12-0918:56favilaKnown unknowns are common in healthcare#2018-12-0918:56bbloomyeah, anything with a form that permits an “N/A” - which i’ve dealt with a lot#2018-12-0918:56favilaUsually there is some code that expresses it in the same coding system as whatever expresses a positive value #2018-12-0918:57favilaIn fields that have less extensive coding I’m not sure how to handle it without having two attributes#2018-12-0918:57favila(Because the type will be different)#2018-12-0918:57bbloomlike foo and foo_known?
is, especially given the context of the recent Maybe Not talk#2018-12-0918:58bbloomi wonder if there are some technical reasons having to do w/ indexing or the datalog implementation - or if it's just an oversight, but i'd be skeptical of the latter#2018-12-0918:59favilaAnother pattern that I use for polymorphic attrs in general in datomic is this#2018-12-0919:00favila{:attr/base :attr/baseTYPE :attr/baseTYPE VAL}#2018-12-0919:00favilaI've never thought of using this to express known unknowns but it seems possible#2018-12-0919:01favila:attr/nameUnknown and then the value is an enumeration of the kind of unknown#2018-12-0919:01bbloomjust a generalized tagged union? yeah seems like that's perfectly reasonable#2018-12-0919:01favilaEssentially, but a way that cooperates well with datalog and datomic's model#2018-12-0919:02favila[?e :attr/base ?a][?e ?a ?v]#2018-12-0919:02bbloomah - clever#2018-12-0919:04bbloomseems like maybe it's intentional for known/unknown to be encoded "one level up"#2018-12-0919:05favilaNil is convenient for simple cases of "I know about this attr but I don't know the value" but there are more dimensions of unknownness#2018-12-0919:06bbloomfor sure, i've encountered many different variants of nil 🙂#2018-12-0919:06favilaNil can blur those the same way using a Boolean vs an enum can#2018-12-0919:06bbloomyup#2018-12-0919:06bbloombut a variant type might be nice#2018-12-0919:07favilaFor fun google "hl7 nullflavor"#2018-12-0919:07bbloomie string or keyword, so you could do 'Brandon' or :unknown, or :not_yet_named or whatever#2018-12-0919:07favilaExtreme example of this#2018-12-0919:07bbloomheh, i've seen this 🙂 fuuuun times#2018-12-0919:08bbloomalthough this brings up a related problem i've encountered a bunch: the "when" of classification#2018-12-0919:08bbloomie do i have just one nil value? or do i have 10 different keywords?
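favila's `[?e :attr/base ?a] [?e ?a ?v]` indirection above can be tried without a connection, since Datomic's `q` also accepts plain collections of tuples as data sources. A sketch under that assumption; the `:person/*` attribute names and `:unknown/*` enum value are invented for illustration:

```clojure
(require '[datomic.api :as d])

;; In-memory tuples standing in for datoms. Entity 1 has a known name;
;; entity 2 instead records *which kind* of unknown applies.
(def facts
  [[1 :attr/base :person/name]
   [1 :person/name "Brandon"]
   [2 :attr/base :person/nameUnknown]
   [2 :person/nameUnknown :unknown/not-yet-named]])

;; First resolve which attribute carries the value for each entity,
;; then use that attribute to fetch the value itself.
(d/q '[:find ?e ?a ?v
       :where
       [?e :attr/base ?a]
       [?e ?a ?v]]
     facts)
;; one row per entity: the known name, and the flavor of unknown
```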
i may need to distinguish, but i may also want to just treat them all the same#2018-12-0919:08bbloomand can’t put metadata on nil 😉#2018-12-0923:02idiomancyIf anyone has time, I could use help structuring this query a little better 😕
The issue is it's an or-join situation, but each branch is conceptually joined to the branch that came before it. So..
What I'm trying to say is: get the event where:
the event is a link issued (has a link) that was sent to this recipient-address,
OR the event is a session creation (has a session) that originated with (has a reference to) said link-issued event,
OR the event is a session joined (has a session) that originated from the aforementioned session-creation event
phrased another way, given the following events with the following keys:
------------------
link/created: [link, recipient-address]
session/created: [session, link]
session/joined: [session]
I want all events that relate to that recipient address.#2018-12-0923:03idiomancyMy best effort so far has produced this:#2018-12-0923:03idiomancy(patch/q
'[:find [?e ...]
  :in $ ?email
  :where
  [?e ::tid ?tid]
  (or-join [?email ?tid]
    (and [?e :recipient_address ?email]
         [?e :magiclink ?magic]
         [?e ::tid ?tid])
    (and [?e :recipient_address ?email]
         [?e :magiclink ?magic]
         [?e2 :session ?session]
         [?e2 :magiclink ?magic]
         [?e2 ::tid ?tid])
    (and [?e :recipient_address ?email]
         [?e :magiclink ?magic]
         [?e2 :magiclink ?magic]
         [?e2 :session ?session]
         [?e3 :session ?session]
         [?e3 ::tid ?tid]))]
db
"#2018-12-0923:17idiomancyokay, I've gotten it running... but this can't possibly be the best way to do it#2018-12-0923:17idiomancyediting the above^^^#2018-12-0923:19idiomancyso that's the rawest of the raw ways to do that, and doesn't at all take advantage of the fact that the steps are kind of an accumulation of the previous steps plus something else#2018-12-1000:55benoit@idiomancy The logic makes sense to me. As for performance, I'm not sure. You could try to separate the queries to see for yourself. You could get all the events representing when the links were issued and then get the related events in two other queries. But I'm not sure that would necessarily speed up anything. I would be interested to know what you find.#2018-12-1000:58idiomancyhmm. I feel like I must be missing something.#2018-12-1000:59idiomancyI guess they are referring to separate groups of entities 🤔#2018-12-1001:00benoitOh you might have a logic issue in the second or clause. The ?session var is used only once.#2018-12-1001:01idiomancymein Gott! 😱 good catch!#2018-12-1001:11idiomancyhahaha, interesting! yeah, you actually made me realize that those joins are superfluous!#2018-12-1001:12idiomancyit doesn't matter that ?e2 has a session in the second case or indeed that ?e has a magiclink in the first case!#2018-12-1001:34benoitYou also likely don't need the tid, you can just join on the ?e events you're looking for.#2018-12-1012:54arnaud_bosIf anyone wants to help, I'm still struggling with the datomic getting-started guide.
I've finally retrieved the datomic-pro dependency from the repo (using leiningen; deps still doesn't work) and now I'm seeing a weird exception when opening my repl:
I've set up the smallest repro case I could here: https://github.com/arnaudbos/thisisnotalovesong
This is basically just
(require '[datomic.client.api :as d])
(def cfg {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "localhost:8998"})
(def client (d/client cfg))
And then java.lang.IllegalArgumentException: Unable to load client, make sure com.datomic/client is on your classpath#2018-12-1013:36mping@arnaud_bos I guess you are missing the datomic client lib#2018-12-1013:36mpinghttps://github.com/arnaudbos/thisisnotalovesong/blob/master/deps.edn#2018-12-1013:24thegeez@arnaud_bos the client lib to connect to a running datomic instance is a separate library: com.datomic/client-pro {:mvn/version "0.8.28"} https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2018-12-1013:30arnaud_bosAh, I see, I did mix info from the getting started guide and from the http://my.datomic.com/account page...#2018-12-1013:30arnaud_bosThank you!#2018-12-1015:06grzmI’ve run into an issue running (datomic.ion.cast/initialize-redirect :stdout) on CIDER. Works when redirecting to :stderr or to a file. I’ve posted a repro case in the hopes someone with more CIDER-fu might be able to figure it out more quickly than I can: https://github.com/grzm/cider-ion-cast-stackoverflow (Also posted in #cider)#2018-12-1015:43m_m_mHi all. Am I right that Datomic in a free version is only based on memory not SSD?#2018-12-1015:45benoitIt only supports local storage (disk). The PRO starter supports all storages.#2018-12-1015:50m_m_mdo you know what is the minimal price for the pro version? I can't find it on their site. There is only some AWS calculator.#2018-12-1015:51marshallDatomic On-Prem information: https://www.datomic.com/get-datomic.html#2018-12-1016:58Dustin GetzIdea: pull through relations as if ref https://gist.github.com/dustingetz/cfd6882e2acae6e8b48759ec24c4de0a#2018-12-1018:48joshkhmy Ions lambdas are returning the following:
java.net.ConnectException: Connection refused
and my local SOCKS connection to my cloud instance is returning the following stack trace. any clues?
:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message SOCKS4 tunnel failed, connection closed, :cognitect.http-client/throwable
#error {
:cause SOCKS4 tunnel failed, connection closed
:via
[{:type java.io.IOException
:message SOCKS4 tunnel failed, connection closed
:at [org.eclipse.jetty.client.Socks4Proxy$Socks4ProxyConnection onFillable Socks4Proxy.java 165]}]
:trace
[[org.eclipse.jetty.client.Socks4Proxy$Socks4ProxyConnection onFillable Socks4Proxy.java 165]
[org.eclipse.jetty.io.AbstractConnection$ReadCallback succeeded AbstractConnection.java 281]
[org.eclipse.jetty.io.FillInterest fillable FillInterest.java 102]
[org.eclipse.jetty.io.ChannelEndPoint$2 run ChannelEndPoint.java 118]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill runTask EatWhatYouKill.java 333]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill doProduce EatWhatYouKill.java 310]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill tryProduce EatWhatYouKill.java 168]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill produce EatWhatYouKill.java 132]
[org.eclipse.jetty.util.thread.QueuedThreadPool runJob QueuedThreadPool.java 762]
[org.eclipse.jetty.util.thread.QueuedThreadPool$2 run QueuedThreadPool.java 680]
[java.lang.Thread run Thread.java 844]]},
:config {:server-type :cloud, :region .., :system .., :query-group .., :endpoint http:// entry.. /, :proxy-port 8182, :endpoint-map {:headers {host entry..}, :scheme http, :server-name entry.., :server-port 8182} } }
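The config in the stack trace above uses `:server-type :cloud`; as the reply just below notes, ion code should use `:ion`. A minimal sketch of what such a client config looks like — all values here (system, region, endpoint) are hypothetical placeholders, not taken from this thread:

```clojure
;; Sketch only: a client config for code running inside an ion.
;; The :system, :region, and :endpoint values are hypothetical placeholders.
(require '[datomic.client.api :as d])

(def cfg
  {:server-type :ion        ; :ion rather than :cloud for ion code
   :region      "us-east-1"
   :system      "my-system"
   :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
   :proxy-port  8182})      ; only relevant when tunneling via SOCKS from a dev machine

(def client (d/client cfg))
```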
#2018-12-1019:19marshall@joshkh if you’re using ions your server type should be :ion not :cloud#2018-12-1019:42joshkhstarting a thread because... slack! i'm seeing a CloudWatch alarm for ConsumedReadCapacityUnits <750 for 15 datapoints, and ConsumedWriteCapacityUnits < 150 for 15 datapoints#2018-12-1019:46joshkhand a cliff edge in the metrics that went from N capacity units to 0.#2018-12-1019:51marshallThe CW alarms for capacity are not relevant. they’re used by autoscaling policies internal to AWS (i.e. dynamodb) to trigger scaling up or down#2018-12-1020:00joshkhah, that's good to know. thanks Marshall. i'll comb through the logs and look for something useful. nothing in the local or remote config has changed - things just stopped working, although i know that another dev has been running some transactions (but no config changes). we had a similar problem a few months ago when our :cloud went down for 24 hours until we solved it by upgrading from a very early release to the split compute/storage stacks.#2018-12-1020:04marshallWhat version and what deployment *(solo or prod)#2018-12-1020:04marshallAlso, can you take a look at your CloudWatch dashboard for that system and see what the instance CPU usage looks like?#2018-12-1020:06joshkhis it okay if we take it to a DM for privacy reasons? happy to post any useful results after. 🙂#2018-12-1020:08marshallyup#2018-12-1019:27joshkhhmm, it is :ion in my code despite what the exception says.#2018-12-1019:28joshkhand i'm seeing channel 2: open failed: connect failed: Connection refused in my proxy connection#2018-12-1019:30joshkhi noticed the problem locally, then hit the remote ions via my API gateway and saw they were down as well without deploying any changes. i can't say for sure but it feels like something tipped over.#2018-12-1019:49marshallhave you committed and pushed your ion code? 
is it possible you’re running code that is using an older config?#2018-12-1019:50marshallif not, i would look in your CloudWatch Logs in the log group named datomic-<yourSystemName>#2018-12-1019:50marshalland see if the ions are firing and reporting any errors there#2018-12-1021:54jaretDatomic Cloud 454 and Ions release
https://forum.datomic.com/t/datomic-cloud-454-and-ions-release/732#2018-12-1109:15stijnnice release! preloading databases during deploy is a big improvement to us#2018-12-1109:15stijnwhat is considered an 'active database'?#2018-12-1114:14ro6This is great stuff! The longer CodeDeploy timeout means I can switch back to using Mount (which I like for development reloading) and still eager load things like the db connection.#2018-12-1023:07grzm@jaret Does that include an update of the Cognitect HTTP library to allow the use of the new (wonderful) AWS API in Ions as well? @marshall indicated that might be in this version. (Please say yes! Please say yes!)#2018-12-1103:26henrikIs this in reference to https://github.com/cognitect-labs/aws-api ?#2018-12-1023:10marshallYep @grzm #2018-12-1023:17grzmI think if you look out your window towards the upper midwest you’ll see the glow from my beaming smile 🙂#2018-12-1023:18marshallYes, i see it shining through the foot deep snow drifts in NC :D#2018-12-1112:49joshkhwondering if someone from cognitect is around to help? yesterday we upgraded from solo to production after our ec2 instance tipped over. the upgrade went well and we can connect to our cloud instance, but we can't deploy our existing ion functions.#2018-12-1112:55Alex Miller (Clojure team)I’m from Cognitect, but probably not qualified to help. But if I were, I would ask what “can’t deploy” means#2018-12-1113:06joshkhwhoops. i solved the problem shortly after asking. that's how it works right? 😉 the upgrade path to production failed so we had to delete the existing compute stack and install as-new (using the previous app-name). our bastion cloud connections came back, however the already-deployed ions were throwing an internal server error, and re-deploying them resulted in {:deploy-status "FAILED", :code-deploy-status "FAILED"}. i tested the lambdas via the AWS console which worked as expected, and a third code push finally succeeded. 
i still had to edit our API gateway, reselect the proxy resource and authorizer functions, then deploy the API.#2018-12-1113:07joshkhjust a wild guess, but i'm assuming that the new stack with the old app name crossed some wires. problem solved.#2018-12-1114:25ro6Is there a common practice for purposefully triggering rollback of the CodeDeploy from within the app? Eg if a Datomic schema migration fails in production or another startup condition isn't met. What condition is CodeDeploy polling to determine that "the service is up"?#2018-12-1115:16marshall@robert.mather.rmm The Datomic process not starting is the most common cause of rollback
usually caused by a bug or deps conflict in an ion that throws when the ns is loaded#2018-12-1115:30ro6@marshall Can my app explicitly signal an issue and cause CodeDeploy to roll back?#2018-12-1115:30Joe LaneThrow an exception when trying to load a namespace.#2018-12-1115:48Joe Lane@robert.mather.rmm ^^ Do you think this covers your use case?#2018-12-1115:58ro6Probably. I'll have to try it out. I generally set an uncaught exception handler (ala https://stuartsierra.com/2015/05/27/clojure-uncaught-exceptions), but maybe I can do that last in my startup process.#2018-12-1116:16stijn@robert.mather.rmm so, you are loading a bunch of stuff at compile time instead of at first request time?#2018-12-1117:38ro6Initially yes. The Code Deploy was timing out and rolling back because it only gave 2 minutes for startup. I switched to first request and doing everything lazy, I was hoping to switch back. Is the time to establish the Datomic connection proportional to data size or something? #2018-12-1117:39ro6To me it's quite desirable to be able to do schema transaction/migration and check everything worked before exposing the new instances to the world. #2018-12-1215:16stijnyes, I think it would benefit our use case too, just checking if it is possible. i'll try it out#2018-12-1116:17stijnhow does that work out with the 'database loading'?#2018-12-1117:17val_waeselynckFailing to deploy on Ion because of a mysterious error thrown when calling d/client: "Assert failed: cfg". Does anyone know what this could mean?
The error is thrown by:
(d/client {:server-type :ion
:region "eu-central-1"
:system "linnaeus"
:query-group "linnaeus"
:endpoint ""})
#2018-12-1117:17val_waeselynckHere's what the error looks like in Cloudwatch (reported via cast/alert):#2018-12-1117:18val_waeselynck#2018-12-1117:18val_waeselynckRunning on com.datomic/ion {:mvn/version "0.9.26"} and org.clojure/clojure {:mvn/version "1.9.0"} on a freshly-updated stack.#2018-12-1117:58marshall@val_waeselynck Can you go to the latest ion? (0.9.28) and also, what version of ion-dev are you using?#2018-12-1117:59val_waeselynck@marshall running on com.datomic/ion-dev {:mvn/version "0.9.176"}#2018-12-1118:00val_waeselynckLet me try updating ion#2018-12-1118:00marshallion-dev also#2018-12-1118:00marshallis now com.datomic/ion-dev “0.9.186”#2018-12-1118:00marshallhttps://docs.datomic.com/cloud/releases.html#2018-12-2716:21marshallinteresting
usually that ^ occurs when you’re on an instance that doesn’t have IAM permissions to use S3#2018-12-2716:23marshallhave you run aws configure or otherwise set up your default AWS credentials?#2018-12-2716:28adamfreyyes, my laptop had AWS credentials for my work account in the ~/.aws/credentials file. I created a new AWS account for this tutorial and put that user name and password under the [datomic-tutorial] header in that file. But it doesn't seem to work with either credentials#2018-12-2716:29adamfreyshould I be able to do this?:
aws s3 ls
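The resolution described just below (attaching S3 permissions to the new IAM user) can also be done from the CLI. A sketch, where the user name is a hypothetical placeholder and AmazonS3FullAccess is the AWS-managed policy corresponding to what the thread describes:

```shell
# Check which IAM identity the CLI calls are actually using
aws sts get-caller-identity

# Attach the AWS-managed S3 policy to the tutorial user
# ("datomic-tutorial" is a hypothetical user name)
aws iam attach-user-policy \
  --user-name datomic-tutorial \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Re-check: this should now succeed instead of returning Access Denied
aws s3 ls
```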
#2018-12-2716:29adamfreybecause I get access denied trying to do that as well#2018-12-2716:33adamfreyoh @marshall I just fixed it using your comment. My new IAM user needed S3FullPermissions to be attached#2018-12-2716:33adamfreythanks for your help!#2018-12-2716:33marshallGreat!#2018-12-2716:34marshallNp#2018-12-2720:01mrgHey, I'm running into java.lang.IllegalStateException: Attempting to call unbound fn: #'datomic.common/requiring-resolve with clojure 1.10.0 and datomic-free-0.9.5703#2018-12-2720:02mrgCould anyone point me in the right direction?#2018-12-2720:04mrgit's possible for me to (in-ns 'datomic.common) (def requiring-resolve clojure.core/requiring-resolve) but that doesn't seem right 🙂#2018-12-2720:15mrgoh, I thought this happened on transaction, but actually this is the offending function:#2018-12-2720:15mrg#2018-12-2720:25mrgAh, got it. Clojure/string is not part of the transactor and I need to use .toLowerCase instead. I'm coming from datahike where this worked#2018-12-2800:26rboydbesides datomic console, are there any notable tools to analyze/report on my database? specifically I'd like to understand how my db is growing or which entities account for the most used storage#2018-12-2820:18dogenpunkCould someone explain to me how to transact entities with components using the client API? I seem to have a critical hole (or two) in my understanding.#2018-12-2820:20dogenpunk{:db/ident :booking/rrule
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref
:db/isComponent true}
{:db/ident :rrule/frequency
:db/cardinality :db.cardinality/one
:db/valueType :db.type/long}
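Given that schema, the client API accepts the component as a nested map; giving each nested map an explicit string tempid (as suggested later in the thread) tends to make transaction errors easier to trace. A sketch only, assuming the schema above is installed and `conn` is an existing connection:

```clojure
;; Sketch: transacting a component entity as a nested map with the
;; client API. Assumes the :booking/rrule schema above is installed
;; and `conn` is an existing connection. Tempid strings are arbitrary.
(require '[datomic.client.api :as d])

(d/transact conn
  {:tx-data [{:db/id         "booking-1"       ; string tempid for the parent
              :booking/rrule {:db/id           "rrule-1" ; tempid for the component
                              :rrule/frequency 1}}]})
```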
#2018-12-2820:22dogenpunkWhen I try transacting
{:booking/rrule {:rrule/frequency 1}}
I get errors re: tempids as only value#2018-12-2820:23dogenpunkBut when I transact
{:booking/rrule {:rrule/frequency 1 :_booking tempid}}
I get fault anomalies when I try to pull :booking/rrule attributes#2018-12-2820:24marshallwhat is the schema for :rrule/attr#2018-12-2820:25marshallhttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/b4103e4a8f14518ed3f6d7f66f56cbf863117974/doc-examples/tutorial.clj#L100 is an example of transacting several component entities#2018-12-2820:27dogenpunkOk, I looked that over and thought that
{:booking/rrule {:rrule/frequency 1}}
would work, however, I keep getting “tempid used only as value” errors#2018-12-2820:28dogenpunkDo components have to be wrapped in a vector even if :db.cardinality/one?#2018-12-2820:29marshallI don’t believe so#2018-12-2820:29dogenpunkOr are components required to have unique ids aside from :db/id?#2018-12-2820:30marshalltry adding :db/id "foo" to the inner entity#2018-12-2820:30dogenpunkShould “foo” refer to the parent :booking entity tempid?#2018-12-2820:30marshallit shouldn’t “need it” but i’m wondering if there is an edge case here#2018-12-2820:30marshallno, a random tempid#2018-12-2820:30marshalldo you have a parent entity tempid?#2018-12-2820:31marshallif that’s a truncated ^ version of your transaction, can you share the full thing please?#2018-12-2820:33dogenpunkHere’s the full transaction:
{:booking/duration "PT1H", :booking/recur-set {:recur-set/rdate [], :recur-set/exdate [], :db/id "bar"}, :booking/student 60842575434612841, :booking/studio 16958867346817130, :booking/dtstart #inst "2015-04-06T21:30:00.000-00:00", :db/id "de92a84f-257c-47b9-bb14-6059bc534c4f", :booking/rrule {:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "foo"}, :booking/dtend #inst "2015-04-06T22:30:00.000-00:00", :booking/status :booking.status/scheduled, :booking/instructor 41539549297377384}#2018-12-2820:34marshalland :booking/duration is db.type string?#2018-12-2820:34dogenpunkYes#2018-12-2820:34marshallrrule and recur-set are cardinality 1?#2018-12-2820:34dogenpunkYes#2018-12-2820:35dogenpunkIf I remove :booking/rrule and :booking/recur-set the transaction succeeds#2018-12-2820:35marshallif you leave either one it is still an issue?#2018-12-2820:35dogenpunkYes, I have to remove both#2018-12-2820:36dogenpunkIf I replace the :db/id in the recur-set and rrule with the parent db/id it succeeds, but then I get faults when querying those attributes#2018-12-2820:36marshallright#2018-12-2820:37marshalltry making them “unnested” for testing:#2018-12-2820:37marshall{:booking/duration "PT1H",
:booking/recur-set "bar",
:booking/student 60842575434612841,
:booking/studio 16958867346817130,
:booking/dtstart #inst "2015-04-06T21:30:00.000-00:00",
:db/id "de92a84f-257c-47b9-bb14-6059bc534c4f",
:booking/rrule "foo",
:booking/dtend #inst "2015-04-06T22:30:00.000-00:00",
:booking/status :booking.status/scheduled,
:booking/instructor 41539549297377384}
{:recur-set/rdate [], :recur-set/exdate [], :db/id "bar"}
{:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "foo"}#2018-12-2820:40dogenpunk(let [{:keys [instructor student studio dtstart dtend duration status rrule recur-set]} booking-two
booking "baz"
tx-booking #:booking{:instructor instructor
:student student
:studio studio
:dtstart (java.util.Date/from dtstart)
:dtend (java.util.Date/from (t/>> dtstart duration))
:duration (.toString duration)
:status ((fnil keyword "booking.status" "scheduled") "booking.status" status)
:db/id booking
:recur-set "bar"
:rrule "baz"}]
(d/transact conn {:tx-data [tx-booking
{:rrule/frequency 1
:rrule/interval :rrule.interval/weeks
:db/id "baz" }
{:recur-set/rdate []
:recur-set/exdate []
:db/id "bar"}]}))
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
tempid used only as value in transaction#2018-12-2820:45dogenpunkJust to be sure:
(d/transact conn {:tx-data [{:booking/duration "PT1H",
:booking/recur-set "bar",
:booking/student 60842575434612841,
:booking/studio 16958867346817130,
:booking/dtstart #inst "2015-04-06T21:30:00.000-00:00",
:db/id "de92a84f-257c-47b9-bb14-6059bc534c4f",
:booking/rrule "foo",
:booking/dtend #inst "2015-04-06T22:30:00.000-00:00",
:booking/status :booking.status/scheduled,
:booking/instructor 41539549297377384}
{:recur-set/rdate [], :recur-set/exdate [], :db/id "bar"}
{:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "foo"}]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
tempid used only as value in transaction#2018-12-2820:46marshalli would try leaving one of the 2 components in (rrule or recur-set) and then remove one attr at a time from it#2018-12-2820:46marshallsee if you can narrow it to a specific one#2018-12-2820:46marshallthis is almost always caused by either type mismatch or cardinality issue#2018-12-2820:47dogenpunkOk, makes sense. I’ll see if I can get a minimal case#2018-12-2820:47dogenpunkBut, nesting a map for a component like this is supported?#2018-12-2820:49marshallit should be#2018-12-2820:49marshalli’m testing it also#2018-12-2821:02dogenpunkOk, this works:
(d/transact conn {:tx-data [{:booking/duration "PT1H",
:booking/student 60842575434612841,
:booking/studio 16958867346817130,
:booking/dtstart #inst "2015-04-06T21:30:00.000-00:00",
:db/id "de92a84f-257c-47b9-bb14-6059bc534c4f",
:booking/dtend #inst "2015-04-06T22:30:00.000-00:00",
:booking/status :booking.status/scheduled,
:booking/instructor 41539549297377384}
{:recur-set/rdate [], :recur-set/exdate [], :db/id "de92a84f-257c-47b9-bb14-6059bc534c4f", }
{:rrule/frequency 1, :rrule/interval :rrule.interval/weeks, :db/id "de92a84f-257c-47b9-bb14-6059bc534c4f", }]})#2018-12-2821:13marshall(def client (d/client cfg))
(d/list-databases client {})
(d/create-database client {:db-name "marshall-test"})
(def conn (d/connect client {:db-name "marshall-test"}))
(def schema [;; person
{:db/ident :person/email
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
{:db/ident :person/name
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one
:db/isComponent true}
;; name
{:db/ident :name/first
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :name/last
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
(d/transact conn {:tx-data schema})
(def data [{:person/email "#2018-12-2821:13marshall@dogenpunk ^ seems to work fine#2018-12-2821:17ronnyCould somebody help with how to set up datomic cloud lambdas and logging?#2018-12-2821:53ronnyI tried lambda-logging together with clojure.tools.logging but I don’t find any documentation on how to get it to work.#2018-12-2823:00dogenpunk@marshall Thanks, I’ll dig in more.#2018-12-2900:01Joe Lane@ronny622 Use ion/cast. Sends info to cloudwatch.#2018-12-2917:19ronnyThanx a lot.#2019-12-3000:21jaihindhreddyTypo at
sed 's/iProcessing/Processing/g'#2019-12-3019:48jaihindhreddyAlso, typo at
sed 's/HMAC-SHA26/HMAC-SHA256/g'#2019-01-0218:04marshallI’ve fixed these. Thanks!#2019-12-3017:23ronnyIs there a way to write unit tests with datomic cloud? I tried to mock the database name to be an in-memory db for each test but it didn't work.#2019-01-0217:23kennyHi @UEUB9VA30. We have been using this lib for writing unit tests with the Datomic Client API: https://github.com/ComputeSoftware/datomic-client-memdb#2019-12-3017:48jaihindhreddyWhy does :db/unique require :db.cardinality/one? A person can have multiple emails and each email can still uniquely identify a person.#2019-12-3022:34favilaI'm not aware of any such restriction? I just made one in a mem db, no problem.#2019-12-3022:39jaihindhreddy@U09R86PA4 here's where it says so:
#2019-12-3022:39jaihindhreddyJust setting up ions. I'm yet to try Datomic.#2019-12-3022:43favilaThis might be cloud specific#2019-12-3022:43favilaOn prem doesn’t care#2019-12-3022:44favilaAs to why, donno. It can cause confusion (known by personal experience) when trying to upsert different entities in the same transaction#2019-12-3022:45favilaOr more precisely, what you think are different entities#2019-12-3022:46favilaSo either semantically they had a greenfield and decided it was a good idea but couldn’t do it on on prem for backward compat; or the impl of cloud drove them to it; or the docs are wrong#2019-12-3017:57jaihindhreddyIs this restriction due to technical or architectural reasons, or is multi-cardinality unique attributes a bad idea in some way I'm blind to?#2019-12-3018:02lilactownI would imagine there’s greater chances of checking uniqueness to be quite slow if it could be multi-cardinality. but honestly I don’t know#2019-12-3019:00lilactownhas anyone put REBL into their Ions app yet?#2019-12-3019:00lilactownabout to try it, wondering if there’s an example floating around#2019-12-3019:00lilactownI’m using Emacs/CIDER so I imagine I’ll have to futz with that 😕#2019-12-3019:43Adrian SmithI'm trying datomic for the first time what does this error mean? bin/transactor config/samples/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:, storing data in: data ...
System started datomic:, storing data in: data
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0 (file:/Users/adriansmith/Datomic/datomic-pro-0.9.5786/lib/netty-all-4.0.39.Final.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
#2019-12-3019:45lilactownI don’t think that should affect operation at all#2019-12-3019:48Adrian Smithah ok#2019-12-3019:57Adrian Smithshame we can't do brew install datomic with cask then brew services to control datomic#2019-12-3020:12dogenpunk@marshall circling back on my issue from Friday. Want to say thanks for your time investigating. The issue seems to be connected to the empty vectors in the components. Once I removed those, the components were properly stored and retrievable.#2019-01-0213:35marshallGlad you got it resolved!#2019-12-3020:16lilactowndoes the current datomic client API implement a nav-igable protocol for data returned by queries / etc.?#2019-12-3020:17lilactownI’m trying to use it with REBL and, e.g. when I pull an entity that has a ref to another entity, it doesn’t appear to have a custom nav implemented on it. just trying to figure out if PEBKAC or if it’s not out yet#2019-12-3100:58jaihindhreddyTo build something like Google Groups, because the posts can cross the 4096 character limit Datomic strings have, how/where would you store these?#2019-12-3101:01johnjin another database#2019-12-3101:01johnjin datomic you save the reference#2019-12-3101:03johnjon-prem doesn't mention this limit but I have heard long strings make on-prem slow, maybe that's why they set a limit in cloud#2019-12-3101:03jaihindhreddylike S3?#2019-12-3101:05johnjlike postgres, dynamo, or some other key-value store#2019-12-3101:09johnjs3 should work too I suppose but a database seems more fit for your use case#2019-12-3101:14lilactownI’m working on a thing for my writing and am thinking of using S3#2019-12-3101:15lilactownit’s cheap and reasonably fast#2019-12-3112:48eoliphantwhat JDK version do Datomic cloud instances (most recent, 454-8573) use?#2019-12-3120:39rboydwill datomic peers use memcached even if the transactor isn't?#2019-12-3120:53rboydI added "memcached=server:port" to a datomic pro starter cloudformation template, and based on memcached stats
(bytes/cur_connections) I think it's using it correctly, but it hasn't added any new metrics to cloudwatch#2019-01-0213:36marshallYes, peers can use memcached even if the transactor isn’t using it
You’d only get cloudwatch metrics if you’ve configured Peer-level cloudwatch reporting via your own callback for metrics#2019-12-3122:39favilayes they will#2019-12-3122:39faviladonno about metrics#2019-01-0116:30augustlfor storing strings larger than 4k in datomic cloud, there's a number of ways to get that wrong, I suppose..? 🙂#2019-01-0116:31augustlwould this work? Generate a squuid, store the string in external storage along with that squuid, wait for external storage to report A-OK, store that squuid in datomic?#2019-01-0116:34lilactownI was thinking of using the hash of the string as the filename#2019-01-0116:37cjsauerWatch out for filename collision when two strings happen to be equal#2019-01-0116:37augustlsounds better to create an unique ID every time you want to create a fact for that string, if you override the old one, and the transaction fails, then external storage and datomic is out of sync#2019-01-0116:37augustland that 🙂#2019-01-0116:38lilactownI mean, if two strings are equal - then no need to store it again? 😄#2019-01-0116:39lilactownthere’s the chance of hash collisions but it should be fairly low#2019-01-0116:40cjsauerYeah that’s true...external storage would need immutable/accrete-only semantics then yeah? Every modification creates a new, for example, S3 object. #2019-01-0116:41lilactownyep exactly#2019-01-0116:42lilactownotherwise you couldn’t use historical queries to read past strings#2019-01-0116:42cjsauerRight. I need an app that shocks me every time I revert back to mutable place oriented thinking. #2019-01-0116:43lilactown😂#2019-01-0116:43lilactownwell, I’m working on a blog-esque type app right now so these problems are at the forefront of my mind#2019-01-0118:31Dustin GetzPing me if you make progress here, hyperfiddle is going to integrate a foreign string store soon too#2019-01-0118:55lilactownwill do. 
it’s one of my stretch goals, once I get the rest of the app up and running#2019-01-0121:15Dustin Getzi would appreciate that thank you!#2019-01-0116:44cjsauer@augustl I think this would prevent the out of sync issue you mentioned #2019-01-0116:45augustlyeah, seems like it would#2019-01-0116:45augustlonly downside I can think of is to have to create hashes for potentially large strings 🙂#2019-01-0116:45augustlbut for something like a blog that probably shouldn't be a problem#2019-01-0118:55lilactownanyone set up an Ion as a custom authorizer for API Gateway?#2019-01-0201:57Oliver GeorgeShould I see cast/dev in Cloudwatch Logs? Both cast/alert and cast/event come through okay. Perhaps "fine-grained logging to troubleshoot a problem during development" implies that it's not something which is intended to be logged after deployment.
https://docs.datomic.com/cloud/ions/ions-monitoring.html#dev#2019-01-0202:40lilactown> NOTE Configuring a destination for cast/dev when running in Datomic Cloud is currently not supported.#2019-01-0202:40lilactownSounds like#2019-01-0202:41lilactownNo, you currently can't see them#2019-01-0202:41lilactownI've been using event to do dev logging because of that 😕#2019-01-0206:14johanatananyone else experiencing a problem where Datomic Cloud insists on version 0.1.23 of com.cognitect/s3-creds but that version isn't found in any maven repos when running locally. local run works fine with 0.1.22. if i try pushing and deploying 0.1.22, after getting the warning that my 0.1.22 was overridden by the cloud's 0.1.23 version, it gets stuck in ValidateService (times out after 5 minutes).#2019-01-0206:17johanatan[this is with a fresh project created in the last few days following the latest templates and advice per the Getting Started guide]#2019-01-0217:11johanatananyone?#2019-01-0217:14marshall@johanatan Are you using s3-creds directly?#2019-01-0217:14johanatannope#2019-01-0217:14johanatanit's one of the 8 or so deps that Datomic Cloud is adding#2019-01-0217:15marshallis your Ion doing something with another AWS lib of some sort? or are you just trying the basic ion tutorial?#2019-01-0217:15johanatannope, it's very basic#2019-01-0217:16johanatanhere's the ns form for my code:
(ns core
(:require [datomic.client.api :as d]
[datomic.client.api.async :as da]
[aleph.http :as http]
[manifold.deferred :as deferred]
[manifold.time :as mt]
[manifold.stream :as st]
[byte-streams :as bs]
[clojure.data.json :as json]
[clj-time.core :as t]
[clj-time.local :as l]
[clj-time.format :as f]
[clj-time.coerce :as c]
[com.rpl.specter :as s]))
#2019-01-0217:17marshallah. you said it gets stuck in validate service#2019-01-0217:17marshallyou mean the deploy step fails?#2019-01-0217:18marshalland eventually the codedeploy times out?#2019-01-0217:27marshall@johanatan https://forum.datomic.com/t/loadionsfailed-caused-by-stackoverflowerror-in-clj-antlr/747/3#2019-01-0217:48johanatan@marshall yes, that’s right #2019-01-0217:49marshallSolo or Production?#2019-01-0217:49marshallnot that it matters overly much, but you can set the Java thread stack size per my last comment on that ^ thread#2019-01-0217:50marshallI suspect you’re hitting a thread stack overflow given the largeish set of dependencies you listed there#2019-01-0217:50marshallI should also note that you can’t use the datomic async client in an ion#2019-01-0219:23johanatanSolo#2019-01-0219:23johanatanWhy no async?#2019-01-0219:24johanatanAlso, even if the thread size tweak fixes this, how can I get 0.1.23 locally? Do I need to add another repo?#2019-01-0219:25marshallthe deps mismatch is not related to the issue you’re hitting#2019-01-0219:30marshall@johanatan Did you look in your CloudFormation logs for your datomic stack to see the specific error responsible for the failure? i suspect it’s the stack overflow#2019-01-0220:13johanatanLet me check #2019-01-0220:16johanatanI have three stacks (two are nested under the first): datomic, datomic-Compute-XXX, and datomic-StorageXXX. all three are in CREATE_COMPLETE state and have never had an "UPDATE" attempted on them.#2019-01-0220:16johanatan@marshall ^#2019-01-0220:26grzmHow does one get the “basis t” value of a db returned by since? 
The value of :t that I see is equivalent to the :t of the current database.#2019-01-0220:33marshall@johanatan Sorry I meant CloudWatch logs#2019-01-0220:33marshalltypo#2019-01-0220:33marshallhttps://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2019-01-0220:34marshallgo to the CloudWatch logs dashboard and find the log group named “datomic-<yourSystemName>”#2019-01-0220:34marshallthen you can search for “Exception”#2019-01-0221:22johanatan@marshall cool, thx!#2019-01-0300:50johanatan@marshall this is what is contained in the CloudWatch logs:
"Error": "Fault",
"CognitectAnomaliesMessage": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:25:1)"
#2019-01-0300:51johanatanline 25 is the last line of the following block:
(defonce system "datomic")
(defonce region "us-east-1")
(defonce cfg {:server-type :ion
:region region
:system system
:creds-profile "personal"
:endpoint (format "" system region)
:proxy-port 8182})
(defonce client (d/client cfg))
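As the thread notes further on, :creds-profile (and a SOCKS :proxy-port) only apply when connecting from a local dev machine. A trimmed sketch of the same config for code deployed in the ion itself — the endpoint format string is left elided here, exactly as in the original snippet:

```clojure
;; Sketch: the config above minus the local-development-only keys.
;; :creds-profile and :proxy-port are used when tunneling from a dev
;; machine; deployed ion code does not need them.
(defonce client
  (d/client {:server-type :ion
             :region      region
             :system      system
             :endpoint    (format "" system region)})) ; endpoint elided in the original
```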
#2019-01-0304:59johanatantried inlining the cfg as follows and am still getting the same error (although there is no longer a binding named cfg):
(defonce client (d/client {:server-type :ion ;; <== error points to this line
                           :region region
                           :system system
                           :creds-profile "personal"
                           :endpoint (format "" system region)
                           :proxy-port 8182}))
#2019-01-0305:04lilactowndoes this work locally for you?#2019-01-0305:04lilactowne.g. in a REPL, connecting to the system through the SOCKS proxy?#2019-01-0305:04johanatanyep, it works locally. i just tried without the cred-profile because that is needed for local only#2019-01-0305:04johanatanbut i just found that the ion-starter has a :query-group specified which i am missing#2019-01-0305:04johanatanhttps://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L20#2019-01-0305:05johanatanperhaps that is the problem?#2019-01-0305:05lilactownyeah that’s the only real difference I can see between what you have and what my Ions code looks like#2019-01-0305:05johanatan:+1:#2019-01-0305:05johanatanok, i'll try it#2019-01-0305:16johanatansame error with :query-group#2019-01-0305:18lilactownit’s an assertion error?#2019-01-0305:18johanatanyep#2019-01-0305:19johanatan"Msg": ":datomic.cluster-node/-main failed: java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"Ex": {
"Cause": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"Via": [
{
"Type": "clojure.lang.ExceptionInfo",
"Message": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"Data": {
"CognitectAnomaliesCategory": "CognitectAnomaliesFault",
"DatomicAnomaliesException": {
"Cause": "Assert failed: cfg",
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "java.lang.AssertionError: Assert failed: cfg, compiling:(core.clj:17:1)",
"At": [
"clojure.lang.Compiler",
"load",
"Compiler.java",
7526
]
},
{
"Type": "java.lang.AssertionError",
"Message": "Assert failed: cfg",
"At": [
"datomic.client.impl.local$create_client",
"invokeStatic",
"local.clj",
97
]
}
],
"Trace": [
[
"datomic.client.impl.local$create_client",
"invokeStatic",
"local.clj",
97
],
[
"datomic.client.impl.local$create_client",
"invoke",
"local.clj",
94
],
[
"clojure.lang.Var",
"invoke",
"Var.java",
381
],
[
"datomic.client.api.impl$dynarun",
"invokeStatic",
"impl.clj",
19
],
...
#2019-01-0305:19johanatanwould it be a problem that this file has a -main defined?#2019-01-0305:19johanatan[i'm using that main to run locally from command line]#2019-01-0305:20lilactownoooo probably#2019-01-0305:20johanatanhmm, let me try removing it#2019-01-0305:21lilactownI’m not sure how ions get loaded, but I could see that mucking with it#2019-01-0305:22johanatanyea, me too 🙂#2019-01-0305:23johanatanbummer. same problem without the -main#2019-01-0305:23lilactownwould you be willing to post your source file?#2019-01-0305:24johanatanlet me see how much i can strip from it to still reproduce the problem#2019-01-0305:29johanatan#2019-01-0305:30johanatan{:allow [
         ;; lambda handlers
         core/ion-func]
 :lambdas {:load-chains
           {:fn core/ion-func
            :description "A description."}}
 :app-name "datomic"}
#2019-01-0305:30johanatan{:paths ["src" "resources"]
 :extra-paths ["resources"]
 :deps
 {clj-time                       {:mvn/version "0.15.0"}
  com.rpl/specter                {:mvn/version "1.1.2"}
  aleph                          {:mvn/version "0.4.6"}
  org.clojure/clojure            {:mvn/version "1.9.0"}
  com.datomic/ion                {:mvn/version "0.9.28"}
  com.datomic/client-cloud       {:mvn/version "0.8.71"}
  org.clojure/data.json          {:mvn/version "0.2.6"}
  com.cognitect/transit-java     {:mvn/version "0.8.311"}
  com.datomic/client-api         {:mvn/version "0.8.12"}
  org.msgpack/msgpack            {:mvn/version "0.6.10"}
  com.cognitect/transit-clj      {:mvn/version "0.8.285"}
  com.cognitect/s3-creds         {:mvn/version "0.1.22"}
  com.amazonaws/aws-java-sdk-kms {:mvn/version "1.11.349"}
  com.amazonaws/aws-java-sdk-s3  {:mvn/version "1.11.349"}}
 :mvn/repos {"datomic-cloud" {:url ""}}
 :aliases
 {:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.186"}}}}}
#2019-01-0305:30johanatan^^ that should be the entirety of it.#2019-01-0305:34lilactownhm, I wonder if it could be the fact that you’re creating the client when your code is first run#2019-01-0305:35lilactownin the Ions tutorial / example (which I pretty much copied), they define get-client:
(defonce get-client
  ;; "This function will return a local implementation of the client
  ;; interface when run on a Datomic compute node. If you want to call
  ;; locally, fill in the correct values in the map."
  (memoize #(d/client {:server-type :ion
                       :region "us-west-2"
                       :system "datomic"
                       :query-group "datomic"
                       :endpoint ""
                       :proxy-port 8182})))
I made it a defonce for REPL-ing but it’s still a function, that has to be invoked when your Ion is first invoked#2019-01-0305:37lilactownI know there’s a bunch of spinning up and down that the Datomic system does when new code gets deployed. For example, a lot of times the first request I send after a deployment fails because it can’t connect to the database#2019-01-0305:52johanatanah yea. that could be it#2019-01-0306:01johanatanyep, that was it.#2019-01-0306:02johanatanthanks for your help!#2019-01-0306:02lilactownsure thing!#2019-01-0306:30johanatanbtw, the docs at: https://docs.datomic.com/cloud/ions/ions-reference.html don't mention the need to delay the client creation#2019-01-0306:30johanatan/ has the code I was trying to run initially#2019-01-0311:12stijn@johanatan this page mentions the specific error you got https://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2019-01-0311:13stijnit changed recently, because previously you could do this, although it wasn't recommended, but now, with the preloading of the active databases, you can't do this anymore#2019-01-0320:18johanatanOh ok. It might be a good idea to update the rest of the documentation (linked to previously) so that people don’t continue going down this path. #2019-01-0314:27dmarjenburghHi, is there a way to pull :db/ident values (like enums). E.g. taking the tutorial example of colors and inventory items (https://docs.datomic.com/cloud/tutorial/assertion.html):
(d/pull db
        {:selector [:inv/type :inv/size :inv/color]
         :eid [:inv/sku "SKU-60"]})
; =>
; #:inv{:type #:db{:id 15617463160930376, :ident :shirt},
;       :size #:db{:id 29304183903486023, :ident :xlarge},
;       :color #:db{:id 32330039903125571, :ident :yellow}}
I would like to retrieve: #:inv{:type :shirt :size :xlarge :color :yellow} without transforming the query result. Is this possible?#2019-01-0314:43markbastianAnyone know if there are plans to bring the datomic cloud find-spec up to date with the on-prem version? For example, support for . and ... to return scalars and vectors?#2019-01-0315:29Jules WhiteI have a strange issue. The following rule works fine when invoked via the REPL, but fails when invoked via a Lambda in Datomic Cloud. However, if I deploy, invoke the rule via the REPL, and then invoke it via Datomic Cloud Lambda, it will work from then on when invoked via Lambda.
Code:
[(foo ?x ?y)
 [(my.namespace/foo ?x ?y) [?q ?r]]]
Initial error when invoking from Datomic Cloud:
The following forms do not name predicates or fns: (my.namespace/foo)#2019-01-0317:24timgilbert@dmarjenburgh: basically no, to my knowledge, though you could use [{:inv/type [:db/ident]} {:inv/size [:db/ident]}] as your pull expression to elide the :db/id stuff out of there. One possible alternative is to just use keywords as your data types for enum values, though there are tradeoffs.#2019-01-0320:38dmarjenburghOk, thanks#2019-01-0514:54eoliphantit’s not gonna work, AFAIK with pull, as they’re just regular distinct entities as far as datomic is concerned, even though we group them together ‘mentally’. We do what you’re after with regular queries. Like the following returns all of our :action/.. enums
:where
[?a :db/ident ?i] ;;finds all :db/idents
[((comp #{"action"} namespace) ?i)]
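timgilbert's earlier suggestion of nesting `[:db/ident]` in the pull pattern might look like this (a sketch reusing the inventory attributes from the tutorial question above):

```clojure
;; Pull only :db/ident from each ref attribute, which elides the :db/id
;; noise; the result is still nested one level, e.g.
;; #:inv{:type #:db{:ident :shirt}, ...}
(d/pull db
        {:selector [{:inv/type [:db/ident]}
                    {:inv/size [:db/ident]}
                    {:inv/color [:db/ident]}]
         :eid [:inv/sku "SKU-60"]})
```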
#2019-01-0317:26timgilbertThere's a bit of background on the difference between how those approaches behave here, although this is in a datomic on-prem/peer context, not a cloud/client context:
http://docs.workframe.com/stillsuit/current/manual/#_stillsuit_enums#2019-01-0319:12idiomancyhey, I assume the answer to this question is "no, that's not a thing, you have to use a query", but is there any way to specify the combination of a key and a value as a unique identifier. so, only one user at a time (who I would like to have easy access to) will have ::role ::admin but many users might have ::role ::moderator. so I'd love to be able to use [::role ::admin] as an eid.#2019-01-0319:29favilawhat is the difference between what you want and making ::role a unique attribute?#2019-01-0319:36idiomancyAs in a db.unique/identifier? Well the fact that for certain values of ::role, multiple distinct entities can share the same value#2019-01-0319:37lilactownare you sure you only ever want one admin role? I don’t know your use case but almost everywhere I look that has a role-based permission system has the capability to add more than one administrator#2019-01-0319:38idiomancyHonestly role is the wrong semantics to signal. We have a smart contract that ensures the existence of one and only one admin. #2019-01-0319:38idiomancyBut i was trying to find a way to describe that which could be general purposed#2019-01-0319:39idiomancyBecause making ::admin true a unique identifier seems weird#2019-01-0319:40idiomancyTechnically there is one admin and one 'owner' where the only thing the owner can do is assign a new admin#2019-01-0319:40idiomancySo, its a unique value#2019-01-0319:41lilactowncould the owner and admin be the same person?#2019-01-0319:41idiomancyUnfortunately, yes#2019-01-0319:41lilactownsounds like the best way would be to use a bool value then IME. ::admin true ::owner true#2019-01-0319:42idiomancyYeah#2019-01-0319:42lilactownfor ease of querying#2019-01-0319:42idiomancyI think youre right#2019-01-0319:42idiomancyThats what I was kind of landing on myself#2019-01-0319:42idiomancyThanks!#2019-01-0319:44lilactownsure thing! 
🙂#2019-01-0320:30johanatanhi, is there an env var i can check the presence for to determine if my ion is running in the cloud or locally?#2019-01-0320:30johanatan[or some other/better way to do this?]#2019-01-0320:43johanatani ended up going with:
(clojure.core/string? (System/getenv "AWS_LAMBDA_FUNCTION_NAME"))
(which should work fine)#2019-01-0320:45johanatani am getting an error when trying to query my solo datomic instance while a process is inserting data in the bkg:
1. Unhandled clojure.lang.ExceptionInfo
Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/busy,
:http-result
{:status 429,
:headers
{"content-length" "9",
"server" "Jetty(9.3.7.v20160115)",
"date" "Thu, 03 Jan 2019 20:44:43 GMT",
"content-type" "text/plain"},
:body nil}}
is this because my solo server can't handle the load I'm trying to place on it?#2019-01-0320:45johanatando i need to change it to production?#2019-01-0320:51marshallhttps://docs.datomic.com/cloud/troubleshooting.html#busy
@johanatan You should either retry or potentially, yes, move up to production#2019-01-0320:52johanatan:+1: thx!#2019-01-0320:52johnjsolo its just a demo#2019-01-0320:52johanatanwhat's the easiest way to upgrade from solo to production?#2019-01-0320:52marshallnot sure i’d categorize it as a demo
lots of workloads are totally feasible on solo#2019-01-0320:52marshallbut if you need more than that - then yes, production#2019-01-0320:53marshall@johanatan https://docs.datomic.com/cloud/operation/upgrading.html#2019-01-0320:53johanatanthx#2019-01-0321:21johanatanfyi.. i did a simple distribution of my load (~20 reqs spaced 250 ms apart) and it fixed my "busy" issue:
(defn- distribute-load [funcs stagger]
  (map-indexed (fn [idx itm] (mt/in (* idx stagger) itm)) funcs))
#2019-01-0321:21johanatanmt is manifold.time#2019-01-0321:35marshall👍#2019-01-0321:49johanatananyone know what's going on with this?
13:49 $ clojure -A:dev -m datomic.ion.dev '{:op :push :creds-profile "personal"}'
{:command-failed "{:op :push :creds-profile \"personal\"}",
:causes
({:message
"Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 98AED4A6FE429CBB; S3 Extended Request ID: LsWqYlVK8gDQAwmUPwh32CFjQp6B2UCos0IDDnNGzwnxokfkPJkeJLrw4wiurwi517UY42Rho8g=)",
:class AmazonS3Exception})}
#2019-01-0321:50johanatansomewhere i can look for additional diagnostics?#2019-01-0321:53marshall@johanatan try explicitly adding :region#2019-01-0321:54johanatansame result#2019-01-0321:55johanatan[and the one without :region has been working for me for the last week or so]#2019-01-0321:55marshallwhat changed?#2019-01-0321:55johanatani added a new lambda to ion-config.edn and tweaked the code a bit. nothing drastic#2019-01-0321:56marshalli’d check to be sure the ion-config.edn is formatted properly#2019-01-0321:56johanatanyea, that was my thought too. i've double checked. but i'll triple check it#2019-01-0321:58johanatanlooks good to me:
{:allow [;; lambda handlers
         core/load-chains
         core/volatility-skews]
 :lambdas {:load-chains
           {:fn core/load-chains
            :description "Loads latest option chain data from td ameritrade into datomic."
            :timeout-secs 900}
           :volatility-skews
           {:fn core/volatility-skews
            :description "Returns any current volatility skew opportunities."
            :timeout-secs 60}}
 :app-name "datomic"}
#2019-01-0321:58johanatanand the code's buffer is loading fine in CIDER#2019-01-0322:02marshallnot sure what else could be responsible if your creds profile file is correct and all#2019-01-0322:02johanatanhmm, maybe my creds have expired or something#2019-01-0322:02johanatani'll look into that#2019-01-0322:02marshallyou can submit an AWS support ticket with the Request ID and the extended request ID and they might be able to tell you what the actual error cause was#2019-01-0322:03marshallusing latest ion-dev and all that?#2019-01-0322:03johanatanit could be intermittent AWS issues (but that seems unlikely)#2019-01-0322:03marshallhttps://docs.datomic.com/cloud/releases.html#current#2019-01-0322:04johanatanyep, i have all of those versions#2019-01-0322:37johanatanthis works so the creds seem valid:
14:37 $ aws s3 --profile personal ls
2018-12-28 13:07:14 datomic-code-2025dbc4-e342-4b10-99d8-24ce8346fec1
2018-12-28 13:03:05 datomic-storagef7f305e7-ulwi6f7m5ipi-s3datomic-1xpzc6j152563
2017-05-02 19:20:12 numerai-data
#2019-01-0323:37johanatanare there additional diagnostic steps I can take? I’m not sure if I can call AWS support as this is just a personal playground account #2019-01-0323:52johnjIf I'm not wrong, you are entitled to cognitect support: https://support.cognitect.com/hc/en-us/requests/new#2019-01-0323:54johnjah solo doesn't have standard support https://www.datomic.com/pricing.html#2019-01-0400:07johanatanit has "developer forum" wherever that is? (I assume not here)?#2019-01-0400:07marshallhttp://Forum.datomic.com#2019-01-0401:54Dustin GetzIf cloud doesn’t officially support cross database query, but does expose raw index access, cannot I implement it myself?#2019-01-0415:28jaretHi All! We’re looking to add some community created Datomic Cloud/Ions examples to our documentation. If you have a project you’d like to share and a repository, or blog/video demoing Datomic Cloud/Ions we can link to please let us know. Feel free to DM me or send an e-mail to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>.#2019-01-0418:55grzmI’m getting a “Datafiable does not exist” error when including the cognitect.aws.client.api with Datomic cloud#2019-01-0418:55grzm#2019-01-0418:57grzmJust deployed a solo stack today using the latest and greatest versions featured on https://docs.datomic.com/cloud/releases.html#2019-01-0418:57marshallwhat version of clojure in your deps.edn?#2019-01-0418:58grzm#2019-01-0418:59grzm1.10.0#2019-01-0419:02lilactownwhen I last built + deployed my Ions, I tried to use 1.10 but was told that it was overridden and (AFAICT) my compute nodes are using 1.9 still#2019-01-0419:04lilactownit also looks like cognitect.aws.client.api depends explicitly on clojure.datafy, which was introduced in 1.10. 
so it is not compatible with 1.9#2019-01-0419:05lilactownhopefully once people are back from the holidays we’ll get a compute update to Clojure 1.10 😬#2019-01-0419:05grzmWell, we’re back from the holidays 🙂 And @marshall’s here to keep us company 😄#2019-01-0419:06grzmGuess I’ll stub back Amazonica. The code looks so nice using the aws client api.#2019-01-0419:08lilactownyeah. AFAICT the actual functionality of the aws-api library doesn’t depend on clojure.datafy#2019-01-0419:09lilactownso in an ideal world it would detect whether the Datafiable protocol was available and optionally extend the protocol#2019-01-0419:09marshallI believe I’ve used the aws-api from ions#2019-01-0419:09marshallhowever the current release does indeed use clojure 1.9#2019-01-0419:09marshallit will be moved to 1.10 on the next release#2019-01-0419:11cjsauerI may have spotted a small typo in the docs: https://docs.datomic.com/cloud/transactions/transaction-data-reference.html#Transaction
db-fn and db-fn-arg should potentially be tx-fn and tx-fn-arg.#2019-01-0419:11lilactownthe Datafiable bits were added in November 29th#2019-01-0419:13lilactownyou could probably clone the project and delete the Datafiable line and be good-to-go tbh#2019-01-0419:24grzm@lilactown good idea. I’ll give that a go.#2019-01-0421:56grzm@lilactown that worked just fine. Thanks!#2019-01-0421:56grzmhttps://github.com/Dept24c/aws-api/commit/967f0d639c61e32a39c2e6b2ce97aa64f735bcde#2019-01-0422:48timgilbertSay, does datomic on-prem with a DynamoDB back-end support encryption at rest?#2019-01-0422:49timgilbert(via the AWS stuff, eg https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html)#2019-01-0422:57timgilbertAh, I now see that our current ddb tables are using it, so the answer is yes (although I'm not sure how actually useful that is, seems like it just protects us from somebody taking down amazon and stealing their hard drives)#2019-01-0423:39steveb8nIt’s most useful when your product needs to pass security review from your customers. If you build a SAAS product for business/enterprise customers, this will come up a lot#2019-01-0423:36steveb8nQ: I have an attribute in all my entities called :common/version which allows me to do app level data migration. I wonder if this is a bit of a contradiction of the “accretion only” design idea. what are the pros/cons of this idea that I am missing?#2019-01-0513:25dmarjenburgh@steveb8n Not exactly sure what you intend to do, but isn’t the version derivable from the transaction that last updated an entity?#2019-01-1616:10favilasure#2019-01-1616:10Ben Hammondit is possible to transact a ref to someone else's component entity#2019-01-1616:11Ben Hammondalthough frowned upon#2019-01-1616:11favilait's a constraint that datomic doesn't enforce#2019-01-1616:12favilaif you don't enforce it yourself retractEntity behavior may surprise you#2019-01-1616:13Dustin GetzYeah. 
so unique scalars aren’t refs so you can’t pull them backwards; unique refs are :many; so it’s just component refs that are :card/one#2019-01-1616:17Dustin GetzThank you#2019-01-1616:40favila@dustingetz I think card-many isComponent will also have reverse-ref card-one#2019-01-1616:41favilaif you have extra they will just be dropped#2019-01-1616:41favila(using d/entity or d/pull api)#2019-01-1617:57mssquick q re: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/shutdown#2019-01-1617:58msswhat does will release Clojure resources mean specifically? kill the process with the peer? or just terminate things like the core async thread pool?#2019-01-1617:59favilaafaik it just calls shutdown-agents#2019-01-1617:59favilaso the agent thread pool is killed#2019-01-1617:59favilaunless you have another non-daemon thread running your process will probably die after that#2019-01-1618:00mssmakes sense. thanks for the clarification#2019-01-1618:00favilaI think (shutdown true) is mostly for java users of datomic#2019-01-1618:01favilawho don't know/care about the clojure system and just want all of it to shut off#2019-01-1618:56tjgGiven a datomic DB or schema, what do people use to generate an entity-relationship diagram? escherize/drytomic, or perhaps Hodur can be used this way?#2019-01-1619:40lilactownI tried hodur but it was a bit too opinionated/simple for my taste#2019-01-1619:41lilactownI tried it specifically because I wanted to get diagrams for free 😛#2019-01-1622:30dpsuttonsetting up schema at work. i was prototyping with :db/valueType ref but now i prefer to go to string. The docs say there is no way to do this. What should i do at this point? Must I come up with a new name for my attribute? Can i do some renaming shenanigans? 
This is all on our test db#2019-01-1622:31favilarename attribute to something else (or retract it's db/ident completely) then make a new attribute#2019-01-1622:33favilaonce you make an attribute you cannot change its type or remove it#2019-01-1622:34favilayou also cannot excise attributes#2019-01-1622:34dpsuttonwhat's the difference between retracting and excising#2019-01-1622:34favilaretracting adds a retraction datom#2019-01-1622:34favilaexcising deletes old datoms#2019-01-1622:36dpsuttoni see. thanks#2019-01-1710:49jaihindhreddyretracting facts is equivalent to asserting those facts are no longer true.
Excision is a painful surgery that changes the past (not without trace. Datoms about excisions are un-excisable and are retained)
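favila's rename-then-recreate approach for changing an attribute's value type might look like this sketch (hypothetical idents):

```clojure
;; 1. Move the old attribute's ident out of the way:
[{:db/id    :my/attr
  :db/ident :my/attr-deprecated}]

;; 2. Then create a fresh attribute under the old name, with the new type:
[{:db/ident       :my/attr
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}]
```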
#2019-01-1711:45jarppeI'm looking for a way to query the most recent transaction that has changed a specific entity. What would be the best way to achieve this?#2019-01-1711:53jarppewhen I make transactions, I add user id and other information to the tx, and then I'd like to show "Last changed by <user> at <time>" on frontend#2019-01-1712:00jaihindhreddy@jarppe Like this?
[:find ?user ?updated-at
 :in $ ?e
 :where [?e _ _ ?t]
 [(max ?t) ?last-txn]
 [?last-txn :user/id ?user]
 [?last-txn :db/txInstant ?updated-at]]
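As favila notes further down, aggregates such as `max` cannot appear in `:where` clauses, so the query above does not work as written. A working variant runs a nested query first and then looks up the tx annotations (a sketch, assuming transactions are annotated with `:user/id` as in the question):

```clojure
;; Find the latest transaction touching entity e, then read its annotations.
(let [last-tx (ffirst (d/q '[:find (max ?t)
                             :in $ ?e
                             :where [?e _ _ ?t]]
                           db e))]
  (d/q '[:find ?user ?updated-at
         :in $ ?tx
         :where
         [?tx :user/id ?user]
         [?tx :db/txInstant ?updated-at]]
       db last-tx))
```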
#2019-01-1712:03jaihindhreddyBy "changed a specific entity", I hope you mean, changed an attribute of the entity directly, and not transitively changed something owned by an entity.#2019-01-1712:05jaihindhreddyIf you're looking at something like "changed an attribute on this entity, or another entity that this entity owns" for some definition of owns, take a look at this talk by the folks at Nubank.
EDIT: The above does not work. I was mistaken. Sorry for the noise.#2019-01-1712:13jarppeThanks @jaihindh.reddy, that's exactly what I'm looking for#2019-01-1712:28jaihindhreddyBTW, a side note, thanks a lot for the talk about lupapiste. Is the "state machines with guards" approach documented or available as a library somewhere?#2019-01-1713:51favilaDid you test this? I’m Pretty sure that “max ?t” is not doing what you think it is. You can’t perform aggregations in datalog where clauses#2019-01-1815:27jaihindhreddyWow. didn't test it. Thanks for clarifying this.#2019-01-1815:39favilayou can either introduce a nested query or you can process the results#2019-01-1712:13jarppeDo you have any idea what's the performance with (max ?t)?#2019-01-1712:27jaihindhreddyI just setup my ions recently. Am a Datomic neophyte. Never used at work or in anger.
So sorry, no idea 😄#2019-01-1713:23m_m_mI have a case: I have item and I have state of my item changed by a sort of events. For example state = amout of items then I have update on my state that the state is equal 7 (for example it was 10) cool...that update was with timestamp 1234, next I am getting next event which should be done earlyer so the state was 8 with timestamp 1233. Is it possible to put something in the past of my item? I would like to put 8 before 7 without changing 7 as my actual state value.#2019-01-1713:24m_m_mI understand that I can not remove anything from Datomic database (which it super cool) but can I add new states as a past of my actual state ? 🙂#2019-01-1713:48Ben Hammondit is only possible to assert new data#2019-01-1713:48Ben Hammondit is not possible to assert old data#2019-01-1713:50Ben Hammondit might be a clue that you need a model 'data insertion as-of` as a data item in you domain#2019-01-1713:50Ben Hammondrather than expecting to use the datomic :db/txInstant timestamp, which records the actual time the data was transacted into datomic#2019-01-1713:52Ben Hammond(I don't have a time machine - time only moves forwards for me. I assume that's the same for most people here)#2019-01-1715:50m_m_mIt is a little bit complicated. Those events are from third party API. It is possible that "cancel" event which is cancelling an offer will be delivered faster then "trade" event (last trade before "cancel"). At the end I have to have all of those "change state" events in a right order in my db because I have to render a list of active orders with their actual state. But at the end I can ask datomic to give me only actual orders with the good state asking for the "last" state from each of my items?#2019-01-1715:53Joe LaneYou need to model time as a first class attribute on your “trade”s. There are two different time models here. There is the logical time (datomic’s time) and then there is temporal time (information about your domain). 
This will allow you to add facts about things that happened in the past.#2019-01-1715:54Joe LaneDatomic’s logical time is extremely helpful for debugging, operations, and verifying why decisions were made.#2019-01-1715:55Joe LaneI’ve learned not to try and gather domain information from the mechanics of how datomic stores the information. You could consider them two different concerns.#2019-01-1716:00m_m_mGreat. Now I see. Thank you @U0CJ19XAM and @U793EL04V!#2019-01-1720:12lwhortonthis helped me wrap my head around it https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-01-1803:20joshkhi've created a blog post about how I handle routing in the context of Datomic Ions and AWS API Gateway. for anyone interested, am i missing something? https://medium.com/@rodeorockstar/datomic-ions-aws-api-gateway-and-routing-d20a1bb086dd#2019-01-1803:21joshkhit's meant to pickup where the Datomic Ions Tutorial stops#2019-01-1807:46fdserrhi there, is this very bad?
WARNING: requiring-resolve already refers to: #'clojure.core/requiring-resolve
in namespace: datomic.common, being replaced by: #'datomic.common/requiring-resolve
and how can I resolve the conflict (using deps, not lein)
TIA#2019-01-1814:02Alex Miller (Clojure team)no, it’s not bad (that’s why it’s a warning)#2019-01-1814:03Alex Miller (Clojure team)I expect a newer version of something on the datomic side probably fixes it, but I’d defer to someone from the datomic team to verify that#2019-01-1816:54johnjI think requiring-resolve was introduced in 1.10, hence the name clash#2019-01-1817:25Alex Miller (Clojure team)yes, but name clashes are quite intentionally not a bug#2019-01-1817:26Alex Miller (Clojure team)so nothing is broken here, but newer versions can or do either silence the warning (by intentionally excluding it) or by switching to the version now in Clojure#2019-01-1817:27Alex Miller (Clojure team)(I’m not sure which of those potential actions has already been taken)#2019-01-1819:09jjfineis there a logical/performance difference between these two:
[(missing? $ ?foo :bar)]
(not [?foo :bar])
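The two clauses jjfine compares, shown in context (a sketch with hypothetical `:person` attributes; in both forms `?e` must already be bound, and here the two are logically equivalent):

```clojure
;; Entities that have :person/name but no :person/email,
;; via the missing? predicate:
(d/q '[:find ?e
       :where [?e :person/name]
              [(missing? $ ?e :person/email)]]
     db)

;; The same set via a not clause:
(d/q '[:find ?e
       :where [?e :person/name]
              (not [?e :person/email])]
     db)
```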
#2019-01-1819:54mdhaneyWith Datomic Cloud, if I need to fail/abort a transaction (i.e. validation failed) within a transaction function, can I just throw an exception? That's how it works with on-prem, but the docs for Cloud don't mention anything about aborting transactions from within a transaction function.#2019-01-1820:48Joe Lane@mdhaney yup#2019-01-1823:02apseyIs there an official docker image for the Datomic Transactor from Cognitect?
Has anyone used this one? https://hub.docker.com/r/pointslope/datomic-pro-starter/dockerfile/
Use case is: running transactors inside kubernetes to avoid running standby transactors (half the cost without the ec2 boot time) and better embrace resource usage within kubernetes (having less idle resources).
cc @marshall#2019-01-2113:48marshallThere is not an official docker image.#2019-01-1904:23dpsuttonin a query, i want to find things by year and month and i plan to pass this to frequencies. here ?start is an instant. Can someone help me unify ?date-string with the string concatenation of the year and month? I'm stumbling for some reason
(d/q '[:find ?date-string
       :in $ ?encounter
       :where
       [?encounter :fhir.Encounter/period ?period]
       [?period :fhir.Period/start ?start]
       [?date-string (str (.getYear ?start) "-" (.getMonth ?start))]]
     db encounter-id)
#2019-01-1904:29dpsuttonif anyone has the same problem, its because i was doing way too much work in one unification clause. split the get years and string concatenation and bob's your uncle#2019-01-1905:23favilaYou have the parts of the clause backwards#2019-01-1905:24favila
[(str (.getYear ?start) "-" (.getMonth ?start)) ?date-string]
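Putting favila's corrected clause back into the full query from above gives something like this sketch (in a function clause the call comes first and the output binding last):

```clojure
;; Note: if ?start is a java.util.Date, .getYear and .getMonth are
;; deprecated and offset (.getYear returns year - 1900, .getMonth is
;; zero-based), so the string may not read the way you expect.
(d/q '[:find ?date-string
       :in $ ?encounter
       :where
       [?encounter :fhir.Encounter/period ?period]
       [?period :fhir.Period/start ?start]
       [(str (.getYear ?start) "-" (.getMonth ?start)) ?date-string]]
     db encounter-id)
```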
#2019-01-1905:26favilaAlso you will need a :with if you want a count of dates. (Remember the results are a set unless you use with)#2019-01-1905:27favilaOr you can do this instead:#2019-01-1905:27favila:find ?date-string (count ?encounter-id)#2019-01-1905:27favilaNow you don’t need frequencies#2019-01-2023:30okocimI don’t quite understand the tradeoffs between creating enumerations as :db/idents then accessing them as attribute-refs vs. creating attributes of :db.type/keyword. I saw a blanket statement in the docs to model enumerations as refs (i.e. the first way I’m talking about), but
I don’t really understand any tradeoffs that this might create. I was wondering if anyone might know when using idents would be better than using keywords or vice-versa. Or, if someone could point me at some further reading. At the end of the day, I’ll stick to the published advice, but I’d like to be able to reason more about my schema choices.
Typical of enumerations, these are low-cardinality attributes that will be on many, many entities.#2019-01-2101:48timhttps://forum.datomic.com/t/enums-vs-keywords/356#2019-01-2122:17johnjDatomic is a slow DB use mostly for low load internal apps, using ident for enums improves performance by the way of reducing memory and storage use.#2019-01-2122:44okocimThanks for the reading material. The discussion makes sense. I didn't notice any appreciable difference in perf between the two approaches (take that with a grain of salt, it was for my specific use case), so I ended up going with the keywords because I plan on adding more values in the future, and don't see the need to require a schema change to do so in my case.
Still, I do appreciate the help that folks have given me to gain better understanding. #2019-01-2122:53tim> using ident for enums improves performance by the way of reducing memory and storage use.
I don’t see how. Idents still store dbids per entry.#2019-01-2122:58johnj@UF1LL8Y95 https://docs.datomic.com/cloud/best.html#idents-for-enumerated-types#2019-01-2123:02timYeah I read that, but how is storing a keyword any different memory wise than storing a pointer? you still have to store something per entry. That was the question I was posing in the original post, but no one from the datomic team bothered to respond.#2019-01-2123:21tim@U4R5K5M0A it’s probably worth reading the last post by anton here: https://groups.google.com/forum/#!topic/datomic/KI-kbOsQQbU#2019-01-2123:45johnjfair, this needs more info from them#2019-01-2123:48johnjI don't think they answer stuff like this in detail to encourage users to buy support.#2019-01-2123:48johnjfrom what I have seen#2019-01-2214:27marshallall idents are stored in memory#2019-01-2214:28marshallin particular#2019-01-2214:29marshall“Idents are designed to be extremely fast and always available. All idents associated with a database are stored in memory in every Datomic transactor and peer.
”#2019-01-2214:29marshallhttps://docs.datomic.com/on-prem/identity.html#idents#2019-01-2214:29marshallif you’re using Cloud, all nodes in your compute groups and query groups (instead of transactor and peer)#2019-01-2216:06tim@U05120CBV fair enough. I’m not sure how different it would be performance wise when you consider the jvm hotspot internals against cached data. I think it’s fair to say enums are optimal, but the question was really how much so… if it’s a nominal difference then enums don’t look too good when they require schema changes for additions and lack cardinality support.#2019-01-2216:07marshallthere are tradeoffs of both approaches; i suspect the perf difference is quite minimal; sometimes it’s nice to have your “enumerated options” live in the same place as your data model definitions (schema); sometimes it’s nice to be able to build up more complex data structures around/about your enums (i.e. more attrs on them);
on the flip side, not dealing with refs can be more straightforward (i.e. just have a kw value)#2019-01-2212:27jeroenvandijkA difference between enumerations and keywords is that the enumeration keyword is only saved once (in the schema), whereas a keyword is saved on every transaction (and giving you the option of freeform input). So if you have a limited set of keywords and you know them in advance you should use enums for better read, write and storage behaviour (so pointer to keyword in the schema = 1 datom VS pointer to keyword and keyword = 2 datoms)#2019-01-2212:30jeroenvandijk@lockdown- I'm curious where you got this: "Datomic is a slow DB, use mostly for low load internal apps"?#2019-01-2213:19mishayeah, slow and low need something to be contrasted with. And why internal?#2019-01-2213:59tim@jeroenvandijk > a keyword is saved on every transaction
so is the entity-id for each enum reference pointer. the question, as I understand it from anton’s post, is not about disk space, as we know enums mean we store less data. it’s not about transactions, because we know there’s a write either way. it’s about reads, memory footprint and performance. I couldn’t care less about storing the extra data to disk.#2019-01-2214:02jeroenvandijkI think the difference comes down to:
1. the enum is indexed, the keyword is not (by default)
2. the enum is one datom read, the keyword is two
I don't know how this translates into your specific performance numbers, but enum should always be better#2019-01-2214:10anderswhat is the suggested strategy for backing up dbs using dynamodb as storage? is enabling backups of dynamodb tables a safe route or should i rather use the datomic cli?#2019-01-2214:31marshallDatomic On-Prem or Cloud?#2019-01-2215:38anderson-prem#2019-01-2215:44marshallYou should use datomic backup#2019-01-2215:45marshallhttps://docs.datomic.com/on-prem/backup.html#2019-01-2215:45marshallddb backup will not work: https://docs.datomic.com/on-prem/ha.html#use-datomic-backup#2019-01-2215:45marshall“Replication or backup of eventually consistent storages cannot (by definition) make consistent copies and is not suitable for disaster recovery.
”#2019-01-2216:00andersthanks 🙂#2019-01-2312:12Ben Hammondis there a nice way to pass sample size into a datalog query as a parameter?
(d/q '[:find (sample ?sample-size ?eid) . :in $ ?sample-size :where [?eid :organisation/id]]
     db
     3)
Execution error (ClassCastException) at datomic.aggregation/sample (aggregation.clj:63).
class clojure.lang.Symbol cannot be cast to class java.lang.Number (clojure.lang.Symbol is in unnamed module of loader 'app'; java.lang.Number is in module java.base of loader 'bootstrap')
has problems with variable binding#2019-01-2312:20Ben HammondI can do something like
((fn [sample-size]
   (d/q {:find [(list 'sample sample-size '?eid) '.]
         :in '[$]
         :where '[[?eid :organisation/id]]}
        db))
 3)
=> [17592186045475 17592186045480 17592186045485]
but seems like a pretty ugly solution#2019-01-2312:25Ben HammondI guess
((fn [sample-size]
   (take sample-size
         (shuffle
          (d/q '[:find [?eid ...] :in $ :where [?eid :organisation/id]] db))))
 3)
=> (17592186045484 17592186045483 17592186045485)
is my best bet ..?#2019-01-2312:30Ben Hammondabstracted as
(defn sampled-query
  "return no more than n items from datomic query, shuffled randomly"
  [n & qargs]
  (take n
        (shuffle (apply d/q qargs))))#2019-01-2314:26favilaAfaik each item in find can only use one bound var and each bound var can only be used once in find#2019-01-2314:27favilaSo e.g. you can’t put a pull expression in a binding then do (pull ?eid ?pull-expr) (violates first rule)#2019-01-2314:28favilaNor can you do :find ?eid (pull ?eid [:my-attr]) (violates second rule)#2019-01-2314:29favilaAnd the error message you get will be utterly mysterious#2019-01-2314:29favilaSo you can’t parameterize sample size#2019-01-2312:47souenzzo@ben.hammond try
(d/q '[:find (sample sample-size ?eid) .
       :in $ sample-size
       :where [?eid :organisation/id]]
     db 3)
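(As the repro just below shows, this variant hits the same ClassCastException: aggregate arguments must be literal constants, not bound variables. A sketch of splicing the literal size into the query form before calling d/q, assuming the Peer API and the same :organisation/id attribute:)

```clojure
;; Sketch: build the query form with the sample size already in place,
;; so the sample aggregate sees a number rather than an unresolved symbol.
(require '[datomic.api :as d])

(defn sample-org-eids [db n]
  (d/q {:find [(list 'sample n '?eid) '.]
        :where '[[?eid :organisation/id]]}
       db))
```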
#2019-01-2312:48Ben Hammond(d/q '[:find (sample sample-size ?eid) .
       :in $ sample-size
       :where [?eid :organisation/id]]
     db 3)
Execution error (ClassCastException) at datomic.aggregation/sample (aggregation.clj:63).
class clojure.lang.Symbol cannot be cast to class java.lang.Number (clojure.lang.Symbol is in unnamed module of loader 'app'; java.lang.Number is in module java.base of loader 'bootstrap')#2019-01-2312:48Ben Hammondsample wants a number
we've only given it a symbol#2019-01-2319:51spiedenhmm, i have a confounding situation where pull isn’t behaving as i expect. shouldn’t i see the same two :demux/id values in both situations below?
(q '{:find [?did]
     :where [[?f :flowcell/id "HW27TBBXX"]
             [?d :demux/flowcells ?f]
             [?d :demux/id ?did]]})
=> #{["demux-id-two"] ["demux-id"]}
(pull [{:demux/_flowcells [:demux/id]}]
      [:flowcell/id "HW27TBBXX"])
=> #:demux{:_flowcells #:demux{:id "demux-id-two"}}
(`q` and pull are just partial applications of the fns from datomic.api)#2019-01-2320:38favila:demux/flowcells is isComponent cardinality-one?#2019-01-2320:39favilaisComponent card1 reverse-refs are card-one#2019-01-2320:39favilaif this is a data shape you expect (multiple "demux"es sharing the same "flowcell") then :demux/flowcells should not be isComponent=true#2019-01-2320:40favilaif you retractEntity a demux entity the flowcell it points to will also be retracted#2019-01-2323:18spieden@U09R86PA4 that was it! thanks. looks like i can’t alter the schema to make flowcell no longer a component, but i need to rethink this data model now anyway.#2019-01-2323:18favilareally? that should be possible#2019-01-2323:19favilahttps://docs.datomic.com/cloud/schema/schema-change.html#sec-3#2019-01-2323:19favilawhy do you think you can't alter this schema?#2019-01-2323:20favilahere for on-prem: https://docs.datomic.com/on-prem/schema.html#altering-component-attribute#2019-01-2417:13spiedeni get: {:errors ({:db/error :db.error/incompatible-schema-install,
         :entity :demux/flowcells,
         :attribute :db/isComponent,
         :was true,
         :requested false}),
:db/error :db.error/invalid-install-attribute}#2019-01-2417:14spieden(on prem)#2019-01-2417:14spiedeni think i’m actually going to keep it as a component, though, and create new flowcell entities each time#2019-01-2417:17spiedeneh, this would be a major change actually with a big cascade#2019-01-2417:27spiedenoh i see, i’m just doing it wrong#2019-01-2417:27spiedenit wants a retraction instead of an assert false#2019-01-2417:27spiedenthanks!#2019-01-2419:33spiedenhmm, actually no the docs say it should work the way i’m trying#2019-01-2419:44spiedenseems like this is the first place where i can’t just :db.install/_attribute over what i have but need to detect update vs create and do :db.alter/_attribute instead#2019-01-2419:52spiedenah nevermind. i just switched to using neither and seems good#2019-01-2320:32kennyWhy does an Ion parameter update request the creation of new physical resources?#2019-01-2408:50henrikThe logic is managed by CloudFormation. Certain parameters can't be updated in place, but mandates construction of a new resource. It has to do with AWS rather than Ions itself.#2019-01-2321:31lilactowndo I need to do a restart after updating the IAM policy attached to my compute node(s)?#2019-01-2321:43lilactownwelp, it looks like after I re-deployed it works now so… I guess so?#2019-01-2321:49kennyI had invalid EDN in my Ion parameters and I got an exception that looks like this:
"Msg": "LoadIonsFailed",
"Ex": {
  "Cause": "EOF while reading",
  "Via": [
    {
      "Type": "clojure.lang.Compiler$CompilerException",
      "Message": "java.lang.RuntimeException: EOF while reading, compiling:(config.clj:10:14)",
      "At": [
        "clojure.lang.Compiler$InvokeExpr",
        "eval",
        "Compiler.java",
        3700
      ]
    },
It would've been great if I got an error saying that ion/get-env failed due to an EOF error.#2019-01-2323:40okocimis it possible to make use of a src-var in an aggregator, or is that just for datomic to know which source to use for passing in the coll to the aggregator? I’m trying to write a “best” aggregator that takes in a collection of entity ids, pulls in some further attributes from those entities, calculates a composite score, and returns the best composite.
Can I access the database specified by the src-var passed into the aggregator, or is what I’m trying to do only possible in-memory on the client side?#2019-01-2323:45okocim(d/q '[:find ?i (offer.calcs/best-composite $ ?o)
       :where
       [?s :store/id "demo-customer-shop"]
       [?q :quote/store ?s]
       [?q :quote/product-suite ?i]
       [?q :quote/offer ?o]]
     (db/latest-db))
;; here, :quote/offer is a composite ref with cardinality many
#2019-01-2323:45okocimsomething like that#2019-01-2402:01tjgI'm connecting to someone's Datomic On-Prem DBs, backed by DynamoDB. It's very slow to connect, and some tiny queries seem to eat RAM & never complete. Any ideas?
(My current theory: the Peer is filling its cache with way too many things.)
;; Tested under:
;; [com.datomic/datomic-pro "0.9.5561"]
;; [com.datomic/datomic-pro "0.9.5786"]
;; "Elapsed time: 132848.045478 msecs"
(defonce ^:private db-prod
  (-> "datomic:ddb://..." d/connect d/db time))
;; "Elapsed time: 37830.972005 msecs"
(defonce ^:private db-dev
  (-> "datomic:ddb://..." d/connect d/db time))
;; Despite commenting out `sample` & `count`, query still fails on db-prod.
(defn request-that-eats-memory [db]
  (d/q '[:find [(rand 1 ?e) #_(sample 1 ?v) #_(count ?e)]
         :where [?e :foo/bar ?v]]
       db))
;; :foo/bar has only 5 entities in db-dev.
;; "Elapsed time: 2263.876834 msecs"
(time (request-that-eats-memory db-dev))
;; Consumes RAM at the rate of 70 MB/min.
;; Runs a few minutes before I abort.
(time (request-that-eats-memory db-prod))
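A lower-level probe that sidesteps query aggregation is to walk the AEVT index lazily with d/datoms, so only the touched index segments get realized. A sketch against the Peer API, reusing the placeholder :foo/bar attribute:

```clojure
;; Sketch: d/datoms is lazy over index segments, so taking a few datoms
;; realizes far less data than running an aggregate over the whole attribute.
(require '[datomic.api :as d])

(defn peek-attr
  "Return the values of up to n datoms of attr."
  [db attr n]
  (->> (seq (d/datoms db :aevt attr))
       (take n)
       (mapv :v)))

;; e.g. (peek-attr db-dev :foo/bar 5)
```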
#2019-01-2404:28favilaMaybe a long time since last reindex? Do transactor logs complain about indexing failures?#2019-01-2404:31favilaThinking out loud here, if index is very old then the log data in Txor might be big, leading to slow conn times as the peer gets the log and slow queries because the unindexed portion of data is so big#2019-01-2413:48marshall@U050TF6A1 I agree with Francis here - long connection times usually indicate an issue with log tail size; do you have any exceptions in your transactor logs recently?#2019-01-2413:49marshallanother possibility is vastly underprovisioned storage#2019-01-2413:51tjgThanks fellows, I'll get access to their transactor logs to check any complaints...#2019-01-2408:14okocimI was wondering if anyone has seen this before:
* I have a bunch of rules defined (example below)
* Rules 1, 3, 4, 6, & 7 all work fine
* Rules 2, 5, & 8 all fail
* The failing rules contain ‘fn-expr(s)’
* I am running on datomic cloud
* ALL rules work when running the query through the bastion (i.e. locally at the repl)
* I get the following exception message when running through API Gateway (i.e. in my environment):
clojure.lang.ExceptionInfo:
:db.error/invalid-rule-fn
The following forms do not name predicates or fns:
(* * -)
{:cognitect.anomalies/category
:cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"The following forms do not name predicates or fns: (* * -)",
:symbols (* * -)
:db/error :db.error/invalid-rule-fn}
* I have clojure.core/* and clojure.core/- in the :allow key in my ion-config
* I tried using the fully qualified fn name in the rule (No luck either)
* The attribute_groups example in day of datomic cloud appears to be doing the same thing.
(https://github.com/cognitect-labs/day-of-datomic-cloud/blob/229b069cb6aff4e274d7d1a9dcddf7fc72dd89ee/tutorial/attribute_groups.clj#L28)
MY RULES BELOW:
(def sort-calcs
'[;; WORKS
[(rule-1 [?p] ?value)
[?p :q.p/t :my-val]
[?p :m.p/p ?value]]
;; FAILS
[(rule-2 [?v ?p] ?value)
[?v :f/m ?ms]
[?p :q.p/t :my-val]
[?p :m.p/rp ?r]
[?p :m.p/p ?pa]
[?p :q.p/t ?tl]
[(* ?r 0.01 ?ms) ?ra]
[(* ?tl ?pa) ?tot]
[(- ?tot ?ra) ?value]]
;; WORKS
[(rule-3 [?p] ?value)
[?p :q.p/t :my-val-2]
[?p :m.p/p ?value]]
;; WORKS
[(rule-4 [?p] ?value)
[?p :q.p/t :my-val-2]
[?p :m.p/r ?value]]
;; FAILS
[(rule-5 [?v ?p] ?value)
[?v :f/m ?ms]
[?p :q.p/t :my-val]
[?p :m.p/rp ?r]
[(* ?ms ?r 0.01) ?value]]
;; WORKS
[(rule-6 [?v] ?value)
[?v :f/sp ?value]]
;; WORKS
[(rule-7 [?v] ?value)
[?v :v/dis ?value]]
;; FAILS
[(rule-8 [?v ?p] ?value)
[?v :f/sp ?pr]
[?p :q.p/t :my-val-2]
[?p :m.p/ma ?a]
[(- ?a ?pr) ?value]]])
I could use some guidance on what to do next.#2019-01-2408:22okocimI posted this in the forum too, sorry if that feels spammy; I’m still feeling my way for what to put where… :shrug:#2019-01-2415:06lilactownmy attempts to reach S3 in my Ions succeed for a couple hours after deploying, but if I come back hours later, fail completely#2019-01-2415:07lilactowndoing another (unrelated) deploy then brings up its ability to access S3 again#2019-01-2415:09lilactownwhen I execute the lambda for my Ion, I get this error:
{
  "errorMessage": "Cannot open <nil> as a Reader.",
  "errorType": "datomic.ion.lambda.handler.exceptions.Incorrect",
  "stackTrace": [
    "datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
    "datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
    "datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:139)",
    "datomic.ion.lambda.handler.Handler.handle_request(handler.clj:155)",
    "datomic.ion.lambda.handler$fn__4075$G__4011__4080.invoke(handler.clj:70)",
    "datomic.ion.lambda.handler$fn__4075$G__4010__4086.invoke(handler.clj:70)",
    "clojure.lang.Var.invoke(Var.java:396)",
    "datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
  ]
}
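(The thread below lands on creating the client inside a function instead of a top-level def. A sketch of that pattern with Cognitect's aws-api; whether to memoize is a judgment call, per the discussion:)

```clojure
;; Sketch: build the S3 client on demand rather than as a namespace-load
;; side effect, so a recycled Ion process gets a usable client again.
(require '[cognitect.aws.client.api :as aws])

(def get-s3-client
  ;; memoize keeps one client per process; drop it to create one per call.
  (memoize (fn [] (aws/client {:api :s3}))))

;; in a handler: (aws/invoke (get-s3-client) {:op :ListBuckets})
```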
#2019-01-2415:13marshall@lilactown do you have some kind of long-lived connection in your S3 ion?#2019-01-2415:14lilactownOK, I could be dense but if this:
(def s3 (aws/client {:api :s3}))
creates a long-lived connection, then yes#2019-01-2415:15lilactownwhich would explain a lot tbh#2019-01-2415:16lilactownI’m using cognitect’s aws-api#2019-01-2415:17marshallyou probably don’t want to do that in a def directly#2019-01-2415:17marshallyou’ll want something like a memoized “get client” function#2019-01-2415:17lilactownthat makes sense! I didn’t realize it was actually creating a connection#2019-01-2415:17marshalli.e. sort of how the ion starter project handles datomic connections https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L13#2019-01-2415:18marshallthat error looks like trying to do something with a closed connection; which would make sense after it sits for a while#2019-01-2415:18marshalland the redeploy of another ion would cycle the process#2019-01-2415:19marshallcausing your namespace-loading side-effect of creating the client again#2019-01-2415:19lilactownmystery solved 😄#2019-01-2415:19lilactownI’m a bit naive about how memoize actually works; if my connection drops, my assumption is that it would not gracefully reconnect but throw an error#2019-01-2415:19marshallyou dont really even need it to be memoized#2019-01-2415:19marshallyou can just have a ‘get client’ function#2019-01-2415:19marshallthe key is not to create the client as a side effect of ns loading (via def)#2019-01-2415:19marshallbut instead create it (or refresh it) when you invoke#2019-01-2415:20marshalllike this: https://github.com/pedestal/pedestal-ions-sample/blob/master/src/ion_sample/service.clj#L45#2019-01-2415:22lilactownshould I bother trying to avoid calling (aws/client {:api :s3}) on each invocation?#2019-01-2415:28marshalli dont’ know exactly what the overhead is creating the client for that library; if it is cheap i wouldnt worry about it; if it isnt you could put it in a memoized function#2019-01-2415:29lilactownI’m wondering that as well 🙂 I didn’t understand that it had some sort of long-lived connection#2019-01-2415:29lilactownwhich 
begs the question how I should clean it up#2019-01-2415:30lilactownif I’m opening a new connection to S3 on every invocation, and those connections are just lying around, I’m afraid I’ll end up taking up a ton of resources after awhile#2019-01-2418:18marshallnot if it goes out of scope#2019-01-2417:13adammillerCurious if anyone has had experience with utilizing Ions in a ring style app with CORS enabled? My issue is related to using the binary media type */* as the Ions tutorial recommends along with the CORS setup in API Gateway as apparently they don't play well together. If both are enabled, the preflight OPTIONS request generates an internal server error. Removing the */* binary type fixes the preflight request but then the body of all operations are returned base64 encoded. Any suggestions on what the right solution is to this?#2019-01-2417:22okocim@adammiller I had the same problem. I landed on setting up a single API gateway proxy ion endpoint instead of one per ion. Then I wrote a universal router that will handle the OPTIONS requests and CORS details.
Here is an article written by Joshua Heimbach a few days ago that talks a little more about this approach. Mine is slightly different in detail, but conceptually the same. (I am using a different router)
https://medium.com/@rodeorockstar/datomic-ions-aws-api-gateway-and-routing-d20a1bb086dd#2019-01-2417:27adammillerYeah, I'm already using one endpoint to route to my lambda which is served as ring app (basically) but I think handling cors at the app layer has some downsides 1) would be cost as you will be invoking lambdas for preflight requests (not huge deal), 2) Not sure it's possible to use amazon security this way, again not totally positive on this, just what I've found searching this problem where others talked about having the app layer handle cors.#2019-01-2417:32okocimOk, yeah sorry I didn’t catch that you were on a single proxy ion from your description. I wasn’t happy with those tradeoffs either, but I decided to defer that issue for a little while because I had to move on to other ones 😅. If you come up with a solution that you like, I’d appreciate it if you share.#2019-01-2417:34adammillerI definitely will, thanks for your input! I've been going back and forth on making those concessions myself as I can't spend much more time on this!#2019-01-2418:51adammiller@okocim I found the answer (after a lot of searching). You have to run the following commands (apparently no way to change this in the Console):
aws apigateway update-integration \
--rest-api-id <api-id> \
--resource-id <resource-id> \
--http-method OPTIONS \
--patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_TEXT'
aws apigateway update-integration-response \
--rest-api-id <api-id> \
--resource-id <resource-id> \
--http-method OPTIONS \
--status-code 200 \
--patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_TEXT'#2019-01-2418:52adammillerWould probably be nice for this to be in the Ions documentation somewhere as I'm guessing it will be a common problem for anyone who decides to host a full webapp (or api) inside Ions.#2019-01-2418:52adammillerdocumentation related to setting up CORS, that is.#2019-01-2418:56lilactownfor now sounds like a good blog post 😄#2019-01-2419:05adammillerGood idea, I'll try to write one up....if nothing else I'll know where to find it next time I run into this!#2019-01-2419:01okocim@adammiller Thanks! That’ll allow me to do one of my favorite things: delete some code 🙂#2019-01-2504:49codelimnerIs there a way to turn off datomic.process-monitor info logs from the repl?#2019-01-2504:50codelimnerI get plenty of these:#2019-01-2504:50codelimnerINFO datomic.process-monitor - {:MetricsReport {:lo 1, :hi 1, :sum 1, :count 1}, :AvailableMB 5970.0, :ObjectCacheCount 0, :event :metrics, :pid 18037, :tid 127}#2019-01-2514:27marshallThe settings for metrics reporting are all configured with your logback.xml file#2019-01-2514:28marshallhttps://docs.datomic.com/on-prem/configuring-logging.html#2019-01-2505:43brycecovertAre there any updated recommendations on how to sorting/pagination on large datasets? Most of the posts I’ve seen about this are 5 years old by now. My approach has been to do two queries. The first applies filters, and pulls just the ids and sort field. I sort in-process, and then do another query to fetch the collection of desired entities by id. This is still pretty slow (500ms for 30k entities).#2019-01-2505:44favilaIf there is a single pretty-selective attribute you can use, you can use it with index-range to pull subsets then feed into queries#2019-01-2505:45favilaAnother alternative is to use a secondary database as an index, either polling or watching the tx queue to update#2019-01-2505:48brycecovertInteresting. What do you mean by pretty-selective attribute? 
Could you give an example?#2019-01-2507:13favilaIt’s usually the first clause in your :where#2019-01-2507:14favilaSuppose you want all things posted on a certain day and also a bunch of other stuff#2019-01-2507:15favilaSo your first :where clauses assert that the thing is in the date range (because that is most selective) then a bunch of other clauses check other stuff before finally deciding if it’s in the result set#2019-01-2507:16favilaBut there’s 100000 things in that date range and you only need the first 10 that match#2019-01-2507:17favilaSo you don’t want those first few clauses to look at everything just to get extra results you won’t use#2019-01-2507:19favilaSo you can either divide up the range in the query itself, or you can use d/datoms or d/index-seek to lazily fetch a subset of that range and feed those entity ids to the query (to test the other stuff) as input#2019-01-2507:19favilaIf you get less than your desired limit in results, advance the range then repeat#2019-01-2507:21favilaThe important thing is that this attribute test/range must by itself ensure that a thing may or may not be in the result set. Otherwise you may advance the range and end up with repeated results#2019-01-2507:22favilaIe the attribute test if subsetted must produce non-overlapping result sets#2019-01-2507:33brycecovertThanks, that’s helpful. 🙂#2019-01-2518:40tony.kaySo, I’ve been working with on-prem Datomic for a while now, but now I have a client that is using the client API…with on-prem my testing is super easy since I can leverage datomock to make a db with sample data in it, transact/query/etc, and see the result.
With client I have an external server dependency…how are others testing against that?#2019-01-2519:43Lennart BuitI started using Datomic client memdb #2019-01-2519:44msshey all, playing around locally with datomic 0.9.5385 using the dev storage protocol. trying to transact a schema, which looks something like the following:
(def my-schema [{:db/ident :user/id
                 :db/valueType :db.type/uuid
                 :db/cardinality :db.cardinality/one}
                {:db/ident :user/email
                 :db/valueType :db.type/string
                 :db/cardinality :db.cardinality/one}
                {:db/ident :user/first-name
                 :db/valueType :db.type/string
                 :db/cardinality :db.cardinality/one}
                {:db/ident :user/last-name
                 :db/valueType :db.type/string
                 :db/cardinality :db.cardinality/one}])
when I call (d/transact my-conn my-schema) I get an error ":db.error/entity-missing-db-id Missing :db/id". I was under the impression that datomic assigned db/ids automatically once an entity was transacted. is that not the case?#2019-01-2519:44Lennart Buithttps://github.com/ComputeSoftware/datomic-client-memdb#2019-01-2519:48Lennart BuitIt suits me in sort of small-scoped integration testing #2019-01-2519:50Lennart BuitNot sure how commonly used it is, or if there is anything better, but it at least works for me#2019-01-2519:45msswas following the docs here fwiw: https://docs.datomic.com/on-prem/getting-started/transact-schema.html#2019-01-2519:46marshall@mss what version of datomic?#2019-01-2519:46mssdatomic-pro 0.9.5385 w the dev storage protocol#2019-01-2519:47marshallthe implicit db/id was introduced in 5530#2019-01-2519:47mssah I see. well that explains that 😂#2019-01-2519:47mssappreciate the help#2019-01-2519:47marshallhttps://docs.datomic.com/on-prem/transactions.html#temporary-ids#2019-01-2519:47marshallnp#2019-01-2521:11idiomancywhats the difference between Datomic Cloud and Utility Bastion per https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa?ref=vdr_rf ?
Or, more succinctly, wth is Utility Bastion?#2019-01-2521:22idiomancyI'm trying to make some purchasing decisions today lol#2019-01-2521:36osiis it https://docs.datomic.com/cloud/operation/bastion.html ?#2019-01-2521:44lilactownthe bastion is a node that you can connect to to give you access to the Datomic VPC in order to do local development#2019-01-2521:46lilactownnormally, the only things that can talk to the Datomic DB have to be within the same VPC#2019-01-2521:46lilactownmainly for security reasons#2019-01-2521:48lilactownthe bastion is an EC2 instance that you can connect to from your computer so that you can access the Datomic Cloud system without pushing your code to AWS#2019-01-2521:48idiomancyAhh I see#2019-01-2521:48idiomancyGotcha, that makes sense#2019-01-2600:35lilactownso I got a spike of traffic to my solo system and it’s just not working#2019-01-2600:35lilactownI’m trying to throw up a CDN, but in the meantime how do I troubleshoot and bring it back online?#2019-01-2601:08lilactownrestarting the compute instance seemed to do the trick#2019-01-2718:19ronnyThe function will be called with the current date (Date.). On the repl I get 2 entries but on the server I get zero. This is running on a datomic-cloud solo instance. Could anybody tell me what I am doing wrong?#2019-01-2718:27ronnyThe same version is running (deployed) on both#2019-01-2808:16ronnydate is an instant#2019-01-2811:09ronnyI found the problem, but I have no idea why this was the problem?
I removed the (flatten) and it worked.#2019-01-2811:10ronnySeems something on the server side is inconsistent…#2019-01-2816:09marshallWhat is the type and cardinality of :rule/id ?#2019-01-2803:41henrikHaving created a CloudFormation template that produces an ElasticSearch cluster, isolated in its own VPC and exposed via an endpoint service, hooked up to CodeDeploy for configuration updates, I have a newfound respect for the people who put Datomic Cloud together.
Making AWS hook things up programmatically for you is like arguing contract details with a bureaucrat from a national telecom.#2019-01-3016:58eoliphantWe use terraform, it's head and shoulders above cloud formation for this kind of stiff#2019-02-0218:54henrik@U380J7PAQ Interesting, I'll have to look into it.#2019-01-2816:51Oleh K.I don't understand how I can upload my current datomic database to the cloud, can anybody tell me?#2019-01-2816:53Oleh K.restore-db function doesn't recognize the cloud type url#2019-01-2816:54benoitThe on-prem storage is not compatible with cloud's.#2019-01-2816:55marshall@okilimnik https://docs.datomic.com/on-prem/moving-to-cloud.html#2019-01-2816:55marshallWe don’t currently have tooling for migrating between On-Prem and Cloud#2019-01-2816:57Oleh K.thanks for clarification#2019-01-2818:14benoitThat might be obvious to some people here but it was not for me until this morning so I thought I would share. The top paragraph at https://docs.datomic.com/on-prem/transactions.html#identify-entities is a bit misleading because you can assert facts about entities that are not "already in the database". The facts get added even if the :db/id does not exist or the entity was retracted. That might not be a problem with one peer but with multiple peers you can end up in situations where an entity gets retracted by one peer and updated by another right after. I'm guessing that's a good argument to use lookup refs instead of :db/ids in transactions. Otherwise you would have to check that the entity id already exists in a tx function every time you want to assert new facts. Did I miss something?#2019-01-2818:51bkamphausI agree that this is a reason to use lookup refs: if you intend the transaction to succeed only when the entity to which the facts refers can be found in the database already. 
I think it’s worth taking care in the language to note that Datomic does not have a separate notion of modeling entity existence — just whether or not there are facts about the entity.#2019-01-2819:09benoit@bkamphaus I think it's worth clarifying this in the docs and not mention things like "an existing id for an entity that's already in the database"#2019-01-2820:27favilathere is a sort of "entity existence" check in datomic. The "t" is a db-wide counter incremented for each tempid (those that don't resolve to an entity id). if an entity id's "t" bits exceed the db's current t, datomic may say the entity doesn't exist#2019-01-2821:24grzmWhat are people doing wrt development of Datomic Cloud/ions on Windows? We’ve got a partner where one of the developers uses Windows.#2019-01-2821:43Dustin Getz@me1740 you can use :db.fn/cas to detect a concurrent modification and fail a transaction#2019-01-2909:28stijnanyone seen this and knows what the meaning / cause of this error is? (datomic cloud)#2019-01-2909:28stijn{:type clojure.lang.ExceptionInfo
:message Next offset 3000 precedes current offset 2000
:data {:datomic.client-spi/request-id 8e372a82-abc5-4a52-805e-fe90786c82f5, :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message Next offset 3000 precedes current offset 2000, :dbs [{:database-id fa80ec7a-d124-4cf2-971e-e43c8d7e8516, :t 1885, :next-t 1886, :history false}]}
:at [datomic.client.api.async$ares invokeStatic async.clj 56]}#2019-01-2911:00joshkhjust curious what's going on here. 🙂 when i run my (datomic) clojure project i consistently get messages that some of my deps are being downloaded from datomic's s3 releases bucket. always the same ones.
$ clj -Stree
Downloading: com/amazonaws/aws-java-sdk-cloudwatch/maven-metadata.xml from
Downloading: com/amazonaws/aws-java-sdk-dynamodb/maven-metadata.xml from
Downloading: com/amazonaws/aws-java-sdk-kinesis/maven-metadata.xml from
Downloading: com/amazonaws/amazon-kinesis-client/maven-metadata.xml from
#2019-01-2912:55Alex Miller (Clojure team)Those are the version metadata files for each of the artifacts. You should need to download them any time there are new releases (which I’d guess are about weekly right now) but they should be cached in your ~/.m2/repository#2019-01-2912:56Alex Miller (Clojure team)Any chance that’s getting cleaned between builds?#2019-01-2912:57Alex Miller (Clojure team)Classpaths get cached in your local dir under ./.cpcache too - that should be in front of the m2 stuff assuming your deps.edn isnt getting updated#2019-01-2914:37joshkhinteresting. if i don't update deps.edn then i can see the cache at work: when i run clj -Stree for a second time there are no downloads. however, launching to Ions still triggers the download process.#2019-01-2911:26joshkhi'm guessing it's because i've included the aws jdk, amazonica, and ions, and that there's a version mismatch. might this affect the size of my ions push to code deploy? i've seen a big increase in the overall time to deploy.#2019-01-2912:58Alex Miller (Clojure team)Might be something with that, prob something the Datomic team could answer better than I #2019-01-2913:34joshkhthanks, alex. in the mean time i'll play around with some exclusions and see where i end up.#2019-01-2913:29Per WeijnitzHi! I've recently started with Datomic, so please bear with me. Is there a way to perform a full scan of the database (with the purpose of studying the internals while learning)? I've tried things like
(d/q '[:find ?e ?a ?v ?tx :where [?e ?a ?v ?tx]] (get-db))
but Datomic refuses with
:db.error/insufficient-binding Insufficient binding of db clause: [?e ?a ?v ?tx] would cause full scan.#2019-01-2913:40joshkha few people have been looking into this for the purpose of cloning a datomic (cloud) db, at least until Cognitect provides official support (please please please). if you're interested in the inner workings and mappings then maybe d/datoms might be of interest?
(seq (d/datoms (client/db) {:index :eavt}))#2019-01-2913:41joshkheven if you could query for everything, i think it would timeout#2019-01-2915:00Per Weijnitz@U0GC1C09L d/datoms looks very useful to me in my studies, thanks! Let's hope Cognitect adds support for full scan soon!#2019-01-2915:04joshkhno problem! full scan probably isn't what we want 🙂 maybe a clever way to copy over tables and s3 buckets (although i appreciate it's not that easy). just curious, are you using datomic cloud?#2019-01-2917:28souenzzoadd [?a :db/ident ?ident] then you can query#2019-01-2923:31favilahttp://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2019-01-3008:14Per Weijnitz@U2J4FRT2T @U09R86PA4 Thanks for the advice! I'll dig into it today and see what I can learn.#2019-01-3008:19Per Weijnitz@U0GC1C09L I see, that seems practical indeed. No, I use on-prem with the dev backend. Hmm... that makes me think. Did you ask this because it may be possible to inspect the datoms directly by inspecting the database backend? (postgres table contents for example)#2019-01-3009:04Per Weijnitz@U2J4FRT2T That does indeed work!#2019-01-3011:23souenzzoIt's like
- "I can't query all data"
- "Can you query only valid data?"
- "Sure I can!"#2019-01-2917:19eraserhdIs there a protocol I can implement to be able to use a data structure with d/q?#2019-01-2917:28benoitYou should be able to pass any collection or relations to d/q. (d/q '[:find ?b :in $ ?a :where [?a :a ?b]] [['a :a 'b]] 'a)#2019-01-2918:37eraserhdbenoit: I know about that, but this is a Clara Rules session. I could pass the results of a query, I think, but the fact that d/q supports databases and vectors suggested to me that maybe there's a protocol that I can implement.#2019-01-3011:34souenzzoI'm interested on that too#2019-01-3012:17souenzzoI think that #datascript may be a better solution to make it.#2019-01-2919:16crowlHi, can I pull many entities in one query with the datomic client api?#2019-01-2920:52joshkhhey @U44C8GM7T, how do you mean? the client api returns as many different entities as matched in your :where clause. can you elaborate?#2019-01-2920:59joshkhwithout knowing more this might not answer your question, but here's an example of pulling many (all) "item" entities that have a sku and are on sale, and returns all of the entities' attributes:
(d/q '{:find  [(pull ?item [*])]
       :in    [$]
       :where [[?item :item/sku _]
               [?item :item/on-sale? true]]}
     db)
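The map-form query above can also be written in the more common list form; a sketch using the same example attributes from joshkh's message:

```clojure
;; List-form equivalent of the map-form query above. :item/sku and
;; :item/on-sale? are the example attributes, not real schema.
(d/q '[:find (pull ?item [*])
       :where
       [?item :item/sku _]
       [?item :item/on-sale? true]]
     db)
```

Either way, each result row is a one-element vector holding a pulled entity map.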
#2019-01-3007:24kommenjfyi, the link to SQUUID in https://docs.datomic.com/on-prem/best-practices.html#unique-ids-for-external-keys is a 404#2019-01-3013:46marshallFixed. Thanks ^ !#2019-01-3014:59Oleh K.I'm trying to develop via datomic-socks-proxy, but I cannot receive the most part of queries because of request timeout. I have a satellite internet and a big ping. How can I fix it?#2019-01-3015:17okocimif you use the 1-arg form of d/q, you should be able to set a timeout that suits your needs:
(d/q
{:query '[:find ...] :args [db ...]
:timeout 60000}) ; or whatever you need
#2019-01-3018:06Oleh K.Thanks, I just thought that if there is no timeout then it is maximized and therefore the problem is not in queries#2019-01-3018:07Oleh K.Are there some defaults?#2019-01-3018:09Oleh K.Seems strange to be forced to use the 1-arg form for local dev#2019-01-3019:55Oleh K.I've checked, this timeout has nothing to do with my error
{:status -1, :status-text "Request timed out.", :failure :timeout}
#2019-01-3019:56okocimwhat’s the query that you’re running?#2019-01-3020:23Oleh K.I'm sorry, the reason really was in my queries (I'm migrating from on-prem to the cloud), thanks for your time#2019-03-2715:48favilaAnd this is an excellent article explaining why you usually can't use transaction time as the only time in your data (or vice-versa) and why datomic is not a great fit for time-series data: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-03-2715:50favila(the other reason is datomic is optimized for reads not writes)#2019-03-2717:43souenzzohow to :find [?e ...] in datomic cloud?#2019-03-2720:31dmarjenburghThe find-coll specification is not supported in the cloud version (yet?)#2019-03-2718:53souenzzothere is docs about how to redirect cast to stdout (at my repl/localhost)#2019-03-2719:14zalkyHi all, I have a datomic on-prem transactor deployed via cloudformation (as per the docs) and I'm looking into how to add classpath functions. Any advice or pointers to documentation?#2019-04-0420:23timgilbertThe main docs are here: https://docs.datomic.com/on-prem/database-functions.html#2019-03-2723:01shaun-mahoodAny chance the correct solution for the problem at https://forum.datomic.com/t/dependency-conflict-with-ring-jetty/447/7 could be added to the Datomic Cloud docs? It didn't even cross my mind that there might be a conflict between these, and my normal workflow would be to check the dependencies or issues on github which doesn't really work here.#2019-03-2809:35quadronlet's say I have a json map with 10 fields; each json map is modeled as an entity in a datomic schema and the json fields map neatly to entity attributes. does that mean that asserting every json map requires at least 9 datoms? (one being the identity index)#2019-03-2813:06benoitIf you need to update all 10 fields, yes. 
But usually you just update a subset of the fields.#2019-03-2809:58misha+1, if you save each map as a separate empty transaction#2019-03-2815:03eoliphanthi, I’m running into an issue where cloud/ion deployments are failing, and I’ve tracked the issue to the failure of one the lambdas in the deployment step function. I’m getting the following error back
{
"error": "States.DataLimitExceeded",
"cause": "The state/task 'arn:aws:lambda:us-east-1:xxxx:function:dat-NZ-Compute-CreateLambdaFromArray-1915A1Q1QXEG8' returned a result with a size exceeding the maximum number of characters service limit."
}
any ideas what might be causing this?#2019-03-2815:41Joe Lane@eoliphant I know this is weird, but check if shortening the description of your ion fixes it. I may have run into something similar in the past and that fixed it.#2019-03-2815:41Joe Lane(Not sure how long the description is, if it's super short then maybe that's not the obvious fix)#2019-03-2815:42eoliphantYeah, i’ve seen that before as well, didn’t think any of the new ones were longer than ones that were working but will double check#2019-03-2815:44dangercoderMany thanks for the great Datomic Cloud tutorial. 🙂✌️
https://docs.datomic.com/cloud/setting-up.html#2019-03-2817:25souenzzocan I send cast to stdout?#2019-03-2818:14shaun-mahood@jeff.terrell So I guess that error message is there already - https://docs.datomic.com/cloud/troubleshooting.html#dependency-conflict#2019-03-2818:17jeff.terrellAh, great! Thanks for letting me know.#2019-03-2818:52dangercoderAnyone with any tips and tricks on how I can get an overview of all schemas in a datomic cloud database? I used to use Datomic console for this before when I was using a peer.#2019-03-2818:59dangercoderSorted it with some queries. I guess I could build some private tool to get an overview of it 🙂#2019-03-2821:36mdhaneyI haven’t tried it yet, but you could look into REBL.
https://youtu.be/c52QhiXsmyI#2019-03-2819:34Jakub Holý (HolyJak)I believe I found a mistake in the Datomic tutorial https://docs.datomic.com/on-prem/tutorial.html but I surely just missed something. They 1. Transact incorrect inventory counts, 2. Retract one, 3. Update the other, 4. Look at the DB as-of #1 so I'd expect to see what was added, ie [[:db/add [:inv/sku "SKU-21"] :inv/count 7]
[:db/add [:inv/sku "SKU-22"] :inv/count 7]
[:db/add [:inv/sku "SKU-42"] :inv/count 100]]
but instead the query shows (d/q '[:find ?sku ?count
:where [?inv :inv/sku ?sku]
[?inv :inv/count ?count]]
db-before)
=> [["SKU-42" 100] ["SKU-42" 1000] ["SKU-21" 7] ["SKU-22" 7]] Why is sku 42 there twice when the cardinality of inv/count is one and when it was only updated from 100 to 1000 in the last tx #3? Can anyone be so kind and explain? #2019-03-2903:33johnjelinekdoes anyone store encrypted PII data in their datomic cloud dbs (for GDPR)? Where do you store your keys?#2019-03-2903:40johnjelineknvm, just learned about this https://docs.datomic.com/on-prem/excision.html#2019-03-2903:41steveb8nNo excision in Cloud (yet) but here’s a good description of what’s required https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html#2019-03-2906:35steveb8nyou can store the keys as encrypted SSM params and read them using ion/get-params. just make sure they start with “datomic-shared” or they won’t be accessible without extra IAM perms (this caught me out)#2019-03-2908:24asierhttps://github.com/magnetcoop/secret-storage.aws-ssm-ps#2019-03-2908:24asierAWS System Manager Parameter#2019-03-2908:24asierhttps://medium.com/magnetcoop/gdpr-right-to-be-forgotten-vs-datomic-3d0413caf102#2019-03-2913:15dmarjenburghWhat is the importance of the KeyName parameter on the CloudFormation template? It's not required to connect to the bastion host and you never connect to the compute nodes. Is it used by CodeDeploy or something?#2019-03-2914:44johnjelinekI thought it was required to connect to the bastion host#2019-03-2917:20dmarjenburghThe startup script of the bastion generates a keypair and uploads the public key to s3 which the proxy script downloads. So the ec2 keyname is actually not used. #2019-03-2919:21ghadithere are ssh keys used for the Datomic nodes themselves -- I think that's what it's for#2019-03-2919:21ghadithey're distinct from the bastion key#2019-03-3004:37NolanCurious if anyone has any recommendations on managing datomic connections in an aws lambda. Currently I essentially do this:
(def client (delay (d/client ...)))
(def conn (delay (d/connect @client {:db-name ...})))
(def q '[:find ...])

(defn somefn [db]
  (let [data (d/q q db)]
    ...))

(defn -handleRequest [_ is os _]
  (somefn (d/db @conn))
  ...)
It works most of the time, but occasionally a lambda will spin up and only ever encounter an anomaly on every invocation: Unable to execute HTTP request: Connect to <storage bucket>:443 failed: connect timed out. It’s as if the connection was never made from the get-go, and until that lambda dies, it will only fail. Do i need to be handling any sort of expiration or refreshing of either the client or the connection? Are there any artificial or hard limits on number of connections in either solo or prod? Would be interested in anyones experience with using datomic in lambda, and how they managed making and maintaining the connection.#2019-03-3013:55Daniel HinesI have a database of square and edge entities. Each square has 4 refs to an edge, and squares may share the same edge. Given a square’s ident A, how can I query for every other square B that shares an edge with the A, or every square C that shares an edge with B, or every square D that shares an edge with C… etc. until there are no more connected squares?
To make it slightly more concrete, given the database:
[{:db/ident :A :edge/right :e1}
{:db/ident :e1}
{:db/ident :B :edge/left :e1 :edge/right :e2}
{:db/ident :e2}
{:db/ident :C :edge/right :e2}
...]
How can I recursively query for the set of entities whose values for the attributes :edge#{top bottom left right} are the same (in this example db, the result should be #{:A :B :C})#2019-03-3014:07mg@d4hines Datalog rules can do that. You might do something like,
[[(connected-square ?a ?b)
  [?a :edge/right ?e]
  [?b :edge/left ?e]]
 [(connected-square ?a ?b)
  [?a :edge/left ?e]
  [?b :edge/right ?e]]
 [(connected-square ?a ?b)
  [?a :edge/top ?e]
  [?b :edge/bottom ?e]]
 [(connected-square ?a ?b)
  [?a :edge/bottom ?e]
  [?b :edge/top ?e]]
 [(connected-square ?a ?b)
  (connected-square ?a ?s)
  (connected-square ?s ?b)]]
#2019-03-3014:07mghttps://docs.datomic.com/on-prem/query.html#rules talks about how to use them#2019-03-3014:09Daniel HinesThanks @michael.gaare ! I’ll try that out.#2019-03-3014:12mgYou need to pass that rule into the query, and then you can find all the connected squares with something like:
[:find ?connected :in $ % ?square :where (connected-square ?square ?connected)]
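Putting mg's pieces together, a sketch of the full call, assuming rules holds the connected-square rule set above and :A is the ident of the starting square (rule invocations are written as bare lists, and the rule set is bound to % via :in):

```clojure
;; Hypothetical invocation: `db` is a database value, `rules` the rule
;; set quoted above, :A the starting square's ident.
(d/q '[:find ?connected
       :in $ % ?square
       :where
       (connected-square ?square ?connected)]
     db rules :A)
```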
#2019-03-3014:14benoit@michael.gaare’s solution miss :C I think. It seems that squares can be included in other squares. :B and :C share the same right edge.#2019-03-3014:16mgit doesn't handle squares that have identical edges, no. Given that they're squares, if they share one edge that's the same side, aren't they by definition the same square?#2019-03-3014:17benoitMaybe 🙂#2019-03-3014:18mgif you needed to extend to encompass that idea, could write another rule that does edge comparison#2019-03-3014:21Daniel Hines@me1740 is correct - squares that share the same exact edge on the same attribute are not necessarily the same.#2019-03-3014:21Daniel HinesThe trick is that edges don’t have length - they’re lines, in the mathematical (infinitely extended) sense.#2019-03-3014:22mglike maybe,
[[(shares-edge ?e ?square)
  [?square :edge/right ?e]]
 [(shares-edge ?e ?square)
  [?square :edge/left ?e]]
 ;; ... etc
 ]
#2019-03-3014:23mgthen connected-square rule clauses instead look like, [(connected-square ?a ?b) [?a :edge/left ?e] (shares-edge ?e ?b)]#2019-03-3014:25Daniel HinesThanks, let me try that out.#2019-03-3014:26mgthese sound more rectanglish to me then 😄#2019-03-3014:26Daniel HinesThey are. I didn’t think the geometry would matter for the Datalog 😅#2019-03-3014:27mgI wanted to make simplifying assumptions to enable my own laziness, see#2019-03-3014:27mgI guess you could write a function to output these#2019-03-3014:28Daniel HinesYeah, we have a recursive function that uses db/entity to do this, but I wanted to see if it was possible to do it in pure datalog.#2019-03-3014:28mgA function to write the rules I mean#2019-03-3014:28Daniel HinesOh? Do tell.#2019-03-3014:28mgcuz it's super tedious#2019-03-3014:29Daniel HinesIndeed 😛#2019-03-3014:30Daniel HinesWhat’s the most effective way to do that? Do I need to use splicing and things like in macros?#2019-03-3014:32mgthis should output what you want for shares-edge:
(let [edges #{:edge/right :edge/left :edge/top :edge/bottom}
      edge-sym (symbol "?e")
      square-sym (symbol "?s")]
  (for [e edges]
    [(list 'shares-edge edge-sym square-sym)
     [square-sym e edge-sym]]))
#2019-03-3014:33mgthose sym bindings probably not necessary either#2019-03-3014:34mghere, even smaller:
(for [e #{:edge/right :edge/left :edge/top :edge/bottom}]
  [(list 'shares-edge '?e '?s)
   ['?s e '?e]])
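For reference, the for expression above generates one shares-edge rule per edge attribute. Written out as data (element order may vary, since sets are unordered), the result looks like:

```clojure
;; The rule set produced by the `for` expression above.
[[(shares-edge ?e ?s) [?s :edge/right ?e]]
 [(shares-edge ?e ?s) [?s :edge/left ?e]]
 [(shares-edge ?e ?s) [?s :edge/top ?e]]
 [(shares-edge ?e ?s) [?s :edge/bottom ?e]]]
```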
#2019-03-3014:41mgthen for my own sense of completeness, the connected-square rules can be built like this I think:
(cons
 [(list 'connected-square '?a '?b)
  (list 'connected-square '?a '?s)
  (list 'connected-square '?s '?b)]
 (for [e #{:edge/right :edge/left :edge/top :edge/bottom}]
   [(list 'connected-square '?a '?b)
    ['?a e '?e]
    (list 'shares-edge '?e '?b)]))
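mg's warning about putting the recursion first can be sidestepped by building the base (direct-connection) rules first and appending the recursive rule; a sketch with the same names as above, assuming the shares-edge rules are concatenated into the same rule set before querying:

```clojure
;; Base clauses first, recursive rule last, so the direct cases are
;; tried before the recursion.
(concat
 (for [e #{:edge/right :edge/left :edge/top :edge/bottom}]
   [(list 'connected-square '?a '?b)
    ['?a e '?e]
    (list 'shares-edge '?e '?b)])
 ['[(connected-square ?a ?b)
    (connected-square ?a ?s)
    (connected-square ?s ?b)]])
```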
#2019-03-3014:43mgputting the recursion first might be bad for performance, though, so look out for that when you're playing with this#2019-03-3014:51benoitNot tested and I don't know how efficient it is but something like this might work:
'[;; define what an edge is
  [(edge ?s ?e)
   (or [?s :edge/top ?e]
       [?s :edge/right ?e]
       [?s :edge/bottom ?e]
       [?s :edge/left ?e])]
  ;; two squares are directly connected if they share an edge
  [(directly-connected-square ?s1 ?s2)
   (edge ?s1 ?e)
   (edge ?s2 ?e)]
  ;; base case: directly connected squares are connected
  [(connected-square ?s1 ?s2)
   (directly-connected-square ?s1 ?s2)]
  ;; the recursion
  [(connected-square ?s1 ?s2)
   (directly-connected-square ?s1 ?s)
   (connected-square ?s ?s2)]]
#2019-03-3019:14Daniel HinesHow do I query for the value of an attribute that may or may not exist? I suppose I could do an or clause on two queries where one had the attribute and the other didn’t, but is there a short-hand for that?#2019-03-3020:27val_waeselynckI think there's a get-else function#2019-03-3020:27Daniel HinesYeah, I eventually found that.#2019-03-3020:27Daniel HinesThanks!#2019-03-3019:16Daniel HinesOh, maybe I just have to put the one potentially non-existent attribute in the or…#2019-03-3019:18Daniel HinesThat didn’t quite work.#2019-03-3020:14mgThe attribute isn't even in the schema you mean?#2019-03-3020:28Daniel HinesNo, it’s in the schema.#2019-03-3020:27Daniel HinesThe get-else function did the trick 👌#2019-03-3020:29Daniel Hines@michael.gaare I’m using your for expression and it’s working beautifully. What’s the easiest way to compose that into a larger set of rules? I’m getting tripped up with quoting.
(let [big-rule (for ...)]
`[~big-rule
[?e ?a ?v]
;; other rules...
])
#2019-03-3020:34Daniel HinesIs that typical?#2019-03-3020:34Daniel HinesWhat would your way look like?#2019-03-3020:35mgGenerally you construct rules as one thing, pass them as a query argument, and call it % in the inputs#2019-03-3020:36mgI'm not sure what you were doing with that macro, give me a second and I'll show you how I would do the shared edge thing we talked about earlier#2019-03-3020:46mg#2019-03-3020:46mgSomething like that#2019-03-3020:46Daniel Hines(def rules
(let [connected (vec (for [[a1 a2] opposite-edges]
[(list 'connected '?panel1 '?panel2)
['?panel1 a1 '?edge]
['?panel2 a2 '?edge]]))]
`[~connected
[(connected-recursive ?p1 ?p2)
(connected ?p1 ?p)
(connected-recursive ?p ?p2)]]))
#2019-03-3020:46Daniel HinesThat’s where I’m at so far.#2019-03-3020:48Daniel Hines(squares got renamed to panels)#2019-03-3020:52Daniel Hines(This isn’t working, btw.#2019-03-3020:52mgProbably don't want to use syntax quote (`) here#2019-03-3020:53mgThat's gonna mess up all the symbols#2019-03-3020:53Daniel HinesAh.#2019-03-3020:54mgJust use concat there#2019-03-3020:58mgso you could make that work at least syntactically by doing:
(let [connected ...] ;; what you're doing here already seems fine
  (concat
   connected
   '[[(connected-recursive ?p1 ?p2)
      (connected ?p1 ?p)
      (connected-recursive ?p ?p2)]]))
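Fully expanded, and assuming a hypothetical opposite-edges of [[:edge/left :edge/right] [:edge/right :edge/left] [:edge/top :edge/bottom] [:edge/bottom :edge/top]], the rule set being built here would look like the following. Note the added base clause for connected-recursive: as written in the thread the rule is purely recursive and can never derive anything, so a base case is sketched in as well.

```clojure
[[(connected ?panel1 ?panel2)
  [?panel1 :edge/left ?edge]
  [?panel2 :edge/right ?edge]]
 [(connected ?panel1 ?panel2)
  [?panel1 :edge/right ?edge]
  [?panel2 :edge/left ?edge]]
 [(connected ?panel1 ?panel2)
  [?panel1 :edge/top ?edge]
  [?panel2 :edge/bottom ?edge]]
 [(connected ?panel1 ?panel2)
  [?panel1 :edge/bottom ?edge]
  [?panel2 :edge/top ?edge]]
 ;; base case (added): directly connected panels are reachable
 [(connected-recursive ?p1 ?p2)
  (connected ?p1 ?p2)]
 ;; transitive case
 [(connected-recursive ?p1 ?p2)
  (connected ?p1 ?p)
  (connected-recursive ?p ?p2)]]
```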
#2019-03-3020:59Daniel HinesThat works! Thanks. Much better than messing with quote/unquote 😛#2019-03-3021:00mgyou could also selectively quote symbols and construct lists if you want#2019-03-3021:01mgquoting the whole form is doing two things for you:
1. without quoting, if the clojure compiler sees a symbol like connected-recursive or ?p1 it will try to resolve that symbol to its value in the current namespace, and throw an exception most likely because it's not going to be there
2. if the clojure compiler sees an unquoted list (like (connected ?p1 ?p)) it will try to turn that into a function call, which will also fail#2019-03-3021:05mgYou can achieve the same result by individually quoting the datalog symbols (eg '?p1 rather than ?p1), and constructing lists by using the list function or by quoting the whole list#2019-03-3021:09Daniel HinesOk, that makes sense. I think where I got tripped up is I just assumed '[] meant something different than [].#2019-03-3101:47shaun-mahoodHas anyone got any good resources on general AWS stuff that would be applicable to ions? I’d love to read up a bit more from sources other than the docs that anyone would recommend.#2019-03-3116:28dangercoderI am working on a problem that i've never solved using a database before because I've always had this state locally. Let's say I have a "worker-entity" with a :worker/current-jobs-counter attribute. Whenever I start a job I will pick a worker where current-jobs-counter is below 10, and increment it by 1. Would that involve a transaction function in datomic?#2019-04-0114:01favilaYou can do this with a transaction function: transaction functions essentially have a lock on the entire database so there's no possibility of stale reads or conflicts.#2019-04-0114:03favilaBut you may also be able to do it with speculative writes that retry if a conflict was detected. Maybe you can use the builtin cas (compare-and-swap) transaction function: https://docs.datomic.com/on-prem/transactions.html#dbfn-cas https://docs.datomic.com/cloud/transactions/transaction-functions.html#sec-1-2#2019-04-0217:47dangercoderi guess a transaction function in the cloud becomes an ion#2019-04-0218:35favilasorry I don't know cloud as well#2019-04-0219:12dangercoderNo worries, I am very thankful for your replies.
Made some good progress conceptually 🙂#2019-04-0115:02shaun-mahoodHas anyone else run into the issue of their first call to an web service ion timing out, while the rest work fine?#2019-04-0115:17adamfeldmanYou might be running into something related to cold starts: https://blog.octo.com/en/cold-start-warm-start-with-aws-lambda/, https://epsagon.com/blog/how-to-minimize-aws-lambda-cold-starts/#2019-04-0115:19shaun-mahoodYeah, I was thinking it was something along those lines. Is that a common thing for ions? I haven't seen anyone specifically discussing cold starts with ions, so I wasn't sure if I was doing something weird or if it was expected behavior. Thanks for the links!#2019-04-0116:05adamfeldmanSorry, I don't use Ions in particular. Many AWS Lambda users use hacks to keep their Lambdas warm -- often that is managed by the serverless framework you're using (https://serverless.com/blog/keep-your-lambdas-warm/). I wonder if Ions does or ought to do something similar#2019-04-0116:08shaun-mahoodYeah, I had this impression that ions had a way around the cold start. Thanks for the help!#2019-04-0116:22Joe Lane@U054BUGT4 What we have done at work is create a cloudwatch event that passes some json to the lambda function to keep it warm once per minute. It’s pretty trivial to implement.#2019-04-0116:24shaun-mahoodPerfect, that sounds like exactly what I need. I thought I might be missing some obvious datomic specific thing (or that my ion had something wrong with how I set it up. Thanks!#2019-04-0117:06hadilsI am using Datomic Cloud ions. I have keys in the SSM, and I am using the code in the documentation to retrieve those parameters. I am getting an error "Too many results" when trying it locally or after a push. The thing is, it used to work. 
Any thoughts?#2019-04-0117:07hadilsI have reduced the failure to this line (ion/get-params {:path "/datomic-shared/dev/stackz-dev/"})#2019-04-0117:08hadilsThe parameters appear in the SSM.#2019-04-0122:15grzmI'm running across a similar issue this afternoon when using Omniconf to populate our config. I'm not seeing a "too many results" error, though one could be masked by the Omniconf SSM code: we're just not getting values populated from SSM. In our case, too, nothing has changed in the config either in the SSM parameter store or in our config-calling code. In our case we're in us-west-2.#2019-04-0202:17hadilsSSM has a max limit of 10 parameters to load into Datomic. I wrote a workaround to load all the parameters into my Clojure call from SSM.#2019-04-0213:47grzmInteresting. Do you have a reference for this issue? Where this limit is documented?#2019-04-0219:00grzmFixed the Omniconf issue. The AWS API returns 10 parameters and a NextToken value when there are more. https://github.com/Dept24c/omniconf/blob/ssm-recursive-next-token/src/omniconf/ssm.clj#2019-04-0201:56Daniel HinesI have a fact along the lines of: “The value a is opposite value b“. I’d like to be able to use that fact in my database queries. The fact feels like a datom: [:a :meta/opposite :b]. How do I assert that as a fact in my db, such that I can do queries like [[?e1 ?a1 _] [?a1 :meta/opposite ?a2] [?e2 ?a2 _]]. I’m getting tripped up because :a is a value, not an entity.#2019-04-0203:19mg@d4hines it needs to be an entity for that to work#2019-04-0203:21mgYou can't assert it unless :a is an entity, and you'd probably want :b to be an entity as well so you can get reverse referencing. If you absolutely cannot use entities for some reason, then you could perhaps embed that fact in a rule or in a database function#2019-04-0205:59Daniel HinesHmm. I also have facts like, “entity 1 has an attribute a with a value of 100” and “entity 1 has an attribute b with a value of 50", e.g [[1 :a 100] [1 :b 50]]. 
How would i model that if a and b were entities?#2019-04-0210:45benoit@d4hines Datomic attributes are entities and you can add your own attributes to them.#2019-04-0210:47benoitSo if :a, :b, and :meta/opposite are all attributes, then [:a :meta/opposite :b] is a valid fact.#2019-04-0211:02benoitAlso you have to be careful with such "commutative" relationships. You might need to define a rule to be able to traverse the relationship in both directions :meta/opposite and :meta/_opposite. Unless people here have better ways to model this kind of relationship in Datomic?#2019-04-0217:16joshkhin datomic cloud, do deployed query functions have to be predicate functions, or can they return other values such as a filtered set of entities?
https://docs.datomic.com/cloud/query/query-data-reference.html#deploying#2019-04-0217:19marshallThey can return whatever you’d like: https://docs.datomic.com/cloud/ions/ions-reference.html#signatures#2019-04-0217:20marshallthis one https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L50 returns a set of values#2019-04-0217:21joshkhhow'd i miss that? 🙂 thanks#2019-04-0217:21marshallnp#2019-04-0217:21marshallalso: https://docs.datomic.com/cloud/query/query-data-reference.html#calling-clojure-functions#2019-04-0217:22joshkhare query functions more performant than manipulating the query results outside of the query?#2019-04-0217:24marshallnot intrinsically, but when using Cloud query functions run on your Datomic instances#2019-04-0217:24marshallso they’re running with the data in memory on the instance#2019-04-0217:25marshallit depends a lot on what you’re doing with the query function(s) as to whether you will see a difference in performance doing it inside the query or returning a set of results and doing the manipulation afterwards#2019-04-0217:26joshkhfor example, i have a query where i want to find the latest transacted entity (A), and then pull some information about an entity related to it (B). i can't use max on the tx values while simultaneously pulling the id of B. instead i have to pull A and B and then sort outside the query.#2019-04-0217:26joshkh> query functions run on your Datomic instances
gotcha!#2019-04-0217:27marshalli think you could do what you’re asking with a nested query#2019-04-0217:27marshallhttps://stackoverflow.com/questions/23215114/datomic-aggregates-usage/30771949#30771949#2019-04-0217:28marshallfind the max tx in the inner query#2019-04-0217:28joshkhdo nested queries have to be deployed query functions? i've tried nesting queries and get back something like datomic.api/q or datomic.client.api/q is not allowed. adding it to the allow list doesn't seem to make a difference.#2019-04-0217:29marshallthen use the outer query to get things related to it#2019-04-0217:29marshallno, they should be possible just generally#2019-04-0217:29marshallwhat version of Datomic?#2019-04-0217:29joshkhlatest cloud#2019-04-0217:29marshallhrm. there might be a whitelisting issue. you can deploy an empty ion that just has datomic.client.api/* in the allow list#2019-04-0217:31joshkhokay i'll look into that#2019-04-0217:31joshkhfor reference:
(->> (client/db)
(d/q
'[:find ?name ?c :in $ :where
[?e :community/name ?name]
[?e :community/category ?c ?maxtx]
[(datomic.client.api/q
'[:find (max ?tx) . :where
[_ :community/category _ ?tx]]
$)
?maxtx]]))
ExceptionInfo 'datomic.client.api/q' is not allowed by datomic/ion-config.edn clojure.core/ex-info (core.clj:4739)
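The fix marshall describes is adding the fully qualified function named in the error to the :allow vector of ion-config.edn. A minimal sketch, with the structure following the ion-starter example and :app-name as a placeholder:

```clojure
;; resources/datomic/ion-config.edn (hypothetical minimal example)
{:allow    [datomic.client.api/q]
 :lambdas  {}
 :app-name "my-app"}
```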
#2019-04-0217:31marshallyeah, try with the allow and I’ll look into registering that for a future fix#2019-04-0217:33joshkhwill do. can you clarify what you mean by an empty ion? i have an allow list in my existing project which i guess applies to all ions deployed to the code deploy target#2019-04-0217:37marshallyep just add it to your existing one#2019-04-0217:37marshallif you werent using ions at all you’d need to make an ion-config.edn in a project and deploy it, but it wouldnt need any code or anything#2019-04-0217:37joshkhgotcha#2019-04-0217:46joshkhthat did the trick. thanks for the help @marshall#2019-04-0217:47marshallnp. glad it got you sorted#2019-04-0217:53souenzzoany plans about websockets in datomic ions?#2019-04-0218:25Joe LaneYou can already use ions with websockets#2019-04-0300:06steveb8nI didn’t know about this. Thanks!#2019-04-0218:26Joe LaneCreate an ion that handles apigatewayV2's websocket onConnect onDisconnect and onMessage and you’re good to go.#2019-04-0218:26Joe Lane@souenzzo ^^#2019-04-0218:27souenzzoThere is docs about that?
Where I call onConnect ?
It's via #pedestal .ions or via "raw" ions?
@lanejo01#2019-04-0218:29Joe LaneThere don’t need to be docs on it, the aws docs cover how to call a lambda, that’s good enough. There is no difference between a “pedestal.ions” ion vs a “raw” ion, other than the web handling interceptors that pedestal.ions lets you opt into.#2019-04-0300:54weiRich mentioned that ion deployment would roll: “So as long as you are not doing something really crazy, like updating in place your functions and things like that.”
Is there any escape hatch for redefining functions? If not, is there a doc covering best practices around accretion?#2019-04-0314:47vemvCan I restore a database backup into a local datomic instance, but discarding all datoms later than timestamp T?
Would enable coarse-grained debugging. e.g. I can repeat the process many times, allowing one to make mistakes and roll them back cleanly#2019-04-0411:46benoitDatomic morning quiz: what is the difference between these 2 transactions?
[{:user/email "
and
[{:db/id [:user/email "
(`:user/email` is a :db.unique/identity attribute)
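joelsanchez's answer, spelled out with a hypothetical email (the values in benoit's original messages were truncated by the export):

```clojure
;; :user/email is :db.unique/identity, so a plain map upserts: it
;; creates the entity when no user has that email, otherwise it
;; updates the existing one.
[{:user/email "jane@example.com" :user/name "Jane"}]

;; A lookup ref as :db/id only resolves against existing entities;
;; the transaction fails if no user has that email.
[{:db/id [:user/email "jane@example.com"] :user/name "Jane"}]
```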
Suspected invisible dependency version conflict, but after an hour of comparing lein deps :tree between the two projects and making the non-working project as near-identical to the working project, I can't figure out why I can't connect. Clojure 1.10 & com.google.guava/guava 2.0.#2019-04-0412:31Petrus TheronLonger exception:
clojure.core/eval core.clj: 3214
...
user/eval13015 REPL Input
...
com.theronic.data.datomic/eval13019 datomic.clj: 60
datomic.api/connect api.clj: 15
datomic.Peer.connect Peer.java: 106
...
datomic.peer/connect-uri peer.clj: 751
datomic.peer/get-connection peer.clj: 669
datomic.peer/get-connection/fn peer.clj: 673
datomic.peer/get-connection/fn/fn peer.clj: 676
datomic.peer/create-connection peer.clj: 490
datomic.peer/create-connection/reconnect-fn peer.clj: 489
datomic.peer.Connection/create-connection-state peer.clj: 225
datomic.peer.Connection/fn peer.clj: 237
datomic.connector/create-transactor-hornet-connector connector.clj: 320
datomic.connector/create-transactor-hornet-connector connector.clj: 322
datomic.connector/create-hornet-factory connector.clj: 142
datomic.connector/try-hornet-connect connector.clj: 110
datomic.artemis-client/create-session-factory artemis_client.clj: 114
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory#2019-04-0415:27Jakub Holý (HolyJak)This is surely trivial but, given a DB of movies with an id attribute, how do I find those whose ids are in #{11 22 33}? Thank you!
find.. where ?id in idset?
(use case: I get a list of invoices and want to keep those not in the DB, meaning not yet processed. Or should I just get the (thousands of) invoices already in the DB and do the set/diff locally?)#2019-04-0415:46benoit@holyjak https://docs.datomic.com/cloud/query/query-data-reference.html#collection-binding#2019-04-0415:48Jakub Holý (HolyJak)Thanks a lot!#2019-04-0416:01Jakub Holý (HolyJak)Question 2: since I'm using the peer library, the query runs locally and thus needs to fetch all the ids anyway, so it would be just as efficient to ask Datomic for all the ids and do the set/diff manually. Or is using collection binding more efficient thanks to indices/something?#2019-04-0416:10benoitYes, your query will likely take advantage of Datomic indexes to not download all the ids on the peer.#2019-04-0419:34mbjarlandHigh level noob question: would using datomic ions for a normal page-serving ring-type app (i.e. not an api or event-triggered code etc) be a bad fit for some reason?#2019-04-0420:28Joe LaneNope#2019-04-0420:28Joe LaneNot any better or worse than running an ec2 instance#2019-04-0611:22Jakub Holý (HolyJak)Doesn't that depend on whether there is any performance penalty for going through Lambda? If there is just 1 Lambda and it is in a language with a fast cold start, then it is perhaps negligible. If there is 1 Lambda per Ion, and especially if it is in Java/Clojure, I suspect you will run into the cold start delay regularly, which can reportedly be half a second to a few seconds...#2019-04-0814:11Joe LaneWe implemented a single cloudwatch event which sends a heartbeat every minute keeping all our lambdas warm. It’s really not as big a deal as people say.
And when you need to scale up and have concurrent lambdas, you should set a low timeout on your api so it then retries against an existing warm lambda, while warming the new concurrent lambda.#2019-04-0501:12bherrmannFYI: https://sites.sju.edu/plw/datalog2/#2019-04-0507:24dmarjenburghI just noticed the limit on string sizes is 4096 characters (https://docs.datomic.com/cloud/schema/schema-reference.html#sec-9-1). I transacted some data with more characters (> 10k) and it stores and retrieves it just fine. It’s perfectly reasonable to set a limit on string sizes, but 4KB is often too small for our use case (The ddb limit is 400KB). How do you best deal with larger text values?#2019-04-0515:20cjsauerI’ve seen others in this channel mention making use of some external blob store (e.g. S3) to store “large” values, and then only storing the key/identifier/slug of that value in Datomic.
To retain immutability, the object in blob storage should never be directly modified, but instead should be copied-on-write. This way datomic’s history will always point to valid blobs. Hope this helps.#2019-04-0515:43dmarjenburghThanks, I was thinking about that too, and combining it with cloudsearch for querying inside the docs.#2019-04-0516:13Dustin GetzI believe large blobs impact performance which is the reason for the 4k limitation in Cloud#2019-04-0511:01teodorluHello!
Basic question. I want datomic clients on a different machine than the peer server. Can I just start the peer server remotely, let :8998 through the firewall and connect to it from my clients with the access credentials I set? Will that expose my access key and secret over the network? Will normal traffic be encrypted over the network? Or do I have to tunnel this myself if I want it encrypted?
I'm working with the docs here: https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html
Thanks!#2019-04-0511:03teodorlu(if my question is stupid because of X reason, please do shout out; I'm figuring this out for the first time)#2019-04-0512:13teodorluSlight update: I think I'm going the safe route; keeping the peer server running behind the firewall, and using an SSH tunnel for connection. Then I can keep SSH as the only means of access. I'm still not sure though, so replies are welcome.#2019-04-0521:08tylerDoes anyone have an opinion around the best practices for access control in datomic cloud? In the past I had used (d/filter ...) in the peer model to provide a filtered view of the database using middleware such that the risk of leaking data from a poorly written handler was low but it doesn’t appear that there is a straightforward solution to this problem in the client model.#2019-04-0901:37steveb8nI did this by ensuring all queries/pulls/writes go through a decorator pipeline (I used interceptors but plain fns would work) before they hit the client api. it works well as long as you ensure all client api calls are proxied by the pipeline#2019-04-0901:39steveb8nunfortunately I can’t share the code because it has proprietary design included but it’s not magical, just adding where conditions etc#2019-04-0901:40steveb8npulls are trickier. in that case you have to check the results after they come back from the client api call#2019-04-0815:24Joe LaneHas anyone here successfully downgraded from a datomic cloud production topology back to a solo topology? I tried this weekend and it never finished. Ultimately I just rolled the update back and am now stuck with a production topology.#2019-04-0818:01currentoori noticed, in transaction functions sometimes data structures are vanilla clojure, like a vector, but then sometimes they are something like a java.util.ArrayList, is there any pattern to this?
i had a call to vector? in my transactor function that started failing sporadically. are vector and java.util.ArrayList the only two, or can there be other types too?#2019-04-0818:16marshall@currentoor Datomic doesn’t make any guarantees about Java/Clojure type preservation across the wire#2019-04-0818:17marshallif you need to know that you have a vector, you’ll want to check for it#2019-04-0818:22currentoor@marshall, that sounds fine but i just need to know all the things that a vector, from the peer, can be converted to? is java.util.ArrayList it? or can it be other types as well?#2019-04-0907:58Ivar Refsdal@currentoor I had the same issue, and while I don't recall the exact details of it, I solved it using clojure.walk/prewalk and checking for java.util.List inside my db fn:
(clojure.walk/prewalk
 (fn [e] (cond
           (and (instance? java.util.List e) (not (vector? e))) (vec e)
           :else e))
 x)
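A self-contained sketch of that normalization, wrapped in a fn (the name `normalize-lists` is hypothetical, not from the thread):

```clojure
(require '[clojure.walk :as walk])

;; Coerce every non-vector java.util.List (as tx data may arrive at the
;; transactor) back into a Clojure vector, recursively.
(defn normalize-lists [x]
  (walk/prewalk
   (fn [e]
     (if (and (instance? java.util.List e) (not (vector? e)))
       (vec e)
       e))
   x))

;; e.g. an Arrays$ArrayList, the type Ivar mentions encountering:
(normalize-lists (java.util.Arrays/asList (into-array [1 2 3])))
;; => [1 2 3]
```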
#2019-04-0908:12Ivar Refsdal@currentoor I believe I've also encountered java.util.Arrays$ArrayList, at least that is what my tests are using to reproduce the behaviour encountered in production. I'm using e.g. (java.util.Arrays/asList (into-array [1 2 3])) in my tests to test the convert function#2019-04-0916:16currentoor@UGJE0MM0W thanks#2019-04-0911:13jaihindhreddyI have a service that I use, and I want to store it's response (EDN) under a particular key in Datomic. AFAIK Datomic doesn't currently support this document like storage? How should I do something like this? Store them externally, and put their IDs in Datomic?#2019-04-0911:45benoitYes, I usually store blobs like these in S3 and the key in Datomic.#2019-04-0911:14jaihindhreddyThe API response is highly dynamic, doesn't use namespaced keywords, and most of the time, I don't foresee needing to use Datalog into it.#2019-04-0911:14jaihindhreddyI'm fine with it being an opaque value.#2019-04-0916:43johnjyou can store it as a string but datomic has performance issues with big strings >1Kb#2019-04-1012:09henrikLike benoit said, store it in somewhere like S3, DynamoDB or another KV-store. You can serialize the EDN with Transit (you can use the msgpack version of Transit for this).
If you generate a unique key (UUIDs are fine for this) every time you update the content, you effectively can preserve the immutability of Datomic, as earlier versions of the same entity will have UUID keys pointing to a different value in the KV store.#2019-04-1019:50rplevyI'm trying to figure out what I'm doing wrong setting up datomic free locally.
I've installed and started H2, I'm running a transactor and a console, I've uncommented h2-port=4335 in transactor.properties, but yet:
user=> (d/create-database "datomic:)
...
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
#2019-04-1019:57benoit@rplevy I believe Datomic free only works with in-memory and disk storage.#2019-04-1019:58rplevyWell I'm trying to do a fairly straightforward setup and just running it at all, and that's what I get#2019-04-1019:58rplevyI thought maybe I had to set up H2 for it to work but I get the same result either way#2019-04-1020:00benoitI had issues recently with Datomic Free and recent Java versions. Maybe you will have more luck with Java 1.8 or Java 1.9.#2019-04-1020:00rplevyLooks like I'm running openjdk 12#2019-04-1020:00rplevyMaybe I need to downgrade...#2019-04-1020:42rplevyInteresting, datomic doesn't work with the newer JDKs, downgraded to 1.9#2019-04-1020:43rplevyAt first I thought maybe it was because I was on the latest datomic free version but earlier versions failed in the same way, until I downgraded java#2019-04-1020:44rplevythanks @me1740!#2019-04-1117:14hadilsHow do I find out when a datom was entered into the database? I think I have to use :db/txInstant but I don't know how to select.#2019-04-1117:16marshall@hadilsabbagh18 https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/log.clj#L28-L40 that section shows a minimal example of finding the transaction and the wall clock time that “something” happened in the db#2019-04-1117:17hadilsThanks @marshall#2019-04-1117:17marshallif you have the datom of interest in hand, you can pull the txInstant from the transaction entity, whose entity ID is the tx of the datom you’ve got#2019-04-1117:24hadilsThat works! @marshall#2019-04-1219:40joelsanchezhow would you query the ids of all the entities that reference a given entity, including transitively?
i.e. I have entity A, which has a :db.type/ref attribute whose value points to B, and B also has a ref attr whose value points to C, and I want to go from C to A
this is trivial to do if you know how many steps there are, but implementing it for the general case seems difficult to me without resorting to complicated graph traversal algos#2019-04-1219:41joelsanchezso is there a simpler way or do I need to do it the hard way? (i.e. graph traversal custom fn)#2019-04-1219:43joelsanchezjust to make my case clearer, this is to detect when I need to reindex an entity in elasticsearch. if a subentity (a component, usually, but not always) is changed I'll need to reindex the parent entity, but the link isn't always direct#2019-04-1219:49benoitIt seems like Datomic rules would work great for this. The question is whether you can have a list of attributes that you can lookup in these rules or whether you should consider any attribute of type ref.#2019-04-1219:51benoitAbsolutely not tested but I would try something in this spirit:
[[(parent ?p ?e)
  [?p ?attr ?e]
  [?attr :db/valueType :db.type/ref]]
 [(ancestor ?a ?e)
  (parent ?a ?e)]
 [(ancestor ?a ?e)
  (parent ?p ?e)
  (ancestor ?a ?p)]]
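A hedged usage sketch of the `ancestor` rule (untested; `db`, `rules`, and `child-eid` are placeholders for a database value, the ruleset above, and the entity id of the changed child):

```clojure
;; Assumes (require '[datomic.api :as d]) on a peer.
;; Find everything that transitively references ?e; the ruleset is
;; passed in through the % data source.
(d/q '[:find ?a
       :in $ % ?e
       :where (ancestor ?a ?e)]
     db rules child-eid)
```

Calling it with `?e` bound (as benoit notes further down) keeps the scan small.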
#2019-04-1502:49Daniel HinesI spent a good chunk of time on this channel bothering you guys before arriving at an identical rule. Do the Datomic docs show this example, and I just missed it? Vaguely googling for phrases like "transitive query Datomic" leads to the mbrainz example which only goes to some depth specified upfront or this forum post which never arrives at this rule https://forum.datomic.com/t/how-to-do-graph-traversal-with-rules/132 If the Datomic docs don't have this example, it may be expedient to add it, or perhaps an authoritative blog post. This one rule is so impressively powerful, I think it deserves whatever hype it can get. "Traverse your graphs instantly with this one weird trick..."#2019-04-1607:58joelsanchezcompletely agree, and I had the same experience with the googling#2019-04-1607:58joelsanchezI was very close to implementing this nightmare https://hashrocket.com/blog/posts/using-datomic-as-a-graph-database#2019-04-1607:59joelsanchezthankfully, rules saved my day#2019-04-1219:55benoitYou might not even need the clause on the type of the attribute.#2019-04-1220:07joelsanchezI'm absolutely blown away, I never used rules before, but this works and I'm very grateful for your help#2019-04-1220:09benoitNo problem. Usually if you have a recursive problem, rules help.#2019-04-1221:11ghadi@joelsanchez as noted it will be much faster if you specify the exact attribute you need#2019-04-1221:11ghadi[?p ?attr ?e] <- not binding the attribute causes a much larger scan#2019-04-1221:22benoit?e should be bound to the child entity so the scan should not be much bigger, unless there are a lot of attributes on each entity they don't care about.#2019-04-1221:23benoitYou should of course call parent and ancestor with a bound ?e to find ?p. Not try to retrieve the whole database 🙂#2019-04-1221:46joelsancheznah, they are small entities, and they don't have that many ref attributes. 
since the child entities are usually components, they aren't referenced by more than one entity, and the depth is always lower than 3#2019-04-1409:44sooheonHey guys, are there any other diagrams like the [codeq schema](https://github.s3.amazonaws.com/downloads/Datomic/codeq/codeq.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAISTNZFOVBIJMK3TQ%2F20190414%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190414T072234Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=75fe7d45baee7d4d418e4d3086fff1483beb480d3cb45a769d6e74242c1effd8) floating around for reference?#2019-04-1513:55dmarjenburghI'm trying to push an unreproducible deployment using clj on-windows. I'm getting the following error:
{:command-failed "{:op :push :uname daniel-test}",
:causes
({:message "Data did not conform",
:class ExceptionInfo,
:data
#:clojure.spec.alpha{:problems
({:path [:local :home],
:pred clojure.core/string?,
:val nil,
:via
[:cognitect.s3-libs.specs/local
:cognitect.s3-libs.specs/home],
:in [:local :home]}),
:spec
#object[clojure.spec.alpha$map_spec_impl$reify__1997 0x52f8a6f4 "
It fails on :local {:home nil} which is not set by me.#2019-04-1515:48Joe LaneTry it without a hyphen#2019-04-1515:49Joe Lanemake it danieltest and see if that fixes the issue.#2019-04-1516:03shaun-mahoodIs there a way to get :db/tx (or :db/txInstant) from the pull API?#2019-04-1517:30ghaditransactions are entities, you can pull them @shaun-mahood#2019-04-1517:30ghadiI'm guessing you mean pull through a non transaction entity to a transaction entity?#2019-04-1517:32shaun-mahood@ghadi - Yeah, that's what I was trying - I have a nested list near the bottom of my pull, and I want to get the transaction for those entities.#2019-04-1517:35ghadii don't think that is possible#2019-04-1517:35shaun-mahoodOk, that's kind of what I figured. Thanks!#2019-04-1519:45favilaPart of the problem here is entities don't have transactions, only assertions/retractions do (entity + attribute + value combo)#2019-04-1519:45favilapull api doesn't operate at that granularity#2019-04-1521:11shaun-mahoodOh, interesting - I didn’t realize there was a distinction there.#2019-04-1611:16marcolI getting a weird error on AWS trying to get the DB of a datomic cloud instance: {:clojure.error/phase :compile-syntax-check, :clojure.error/line 1, :clojure.error/column 1, :clojure.error/source "datomic/client/impl/shared.clj"}#2019-04-1611:31marcolBy isolating the problem I now receive: Unable to resolve entry point, make sure you have the correct version of com.datomic/client on your classpath
when trying to get the client#2019-04-1613:59ghadiwhat is your dependency in your deps.edn or project.clj? @marcol#2019-04-1614:05marcolcom.datomic/client-cloud "0.8.71"
#2019-04-1614:38Laurence ChenHi, I encounter a Datomic design decision -- "How to design at the situation that we need generalized compare-and-swap semantic in Datomic?"
I have sorted out my question in stackoverflow.
https://stackoverflow.com/questions/55706444/how-to-design-at-the-situation-that-we-need-generalized-compare-and-swap-semanti
I really appreciate anyone can give me some hints. Thx.#2019-04-1614:53benoitI would consider the UX of the system. Does it make sense for admins to work on the same request at the same time? If not, I would implement a lock mechanism with CAS so the admins can get immediate feedback whether they should work on an item or someone else just started to work on it. It has the drawback of requiring one more click for the admin before starting working on a request but the advantage of not spending time on a request is someone is already working on it.#2019-04-1614:54benoitIf the extra click is a problem you can always automatically lock when delivering the request to an admin and expire the lock if there is no activity after a certain period of time.#2019-04-1614:58benoitBut if you don't mind wasting admin's time, I would just do the modifications of the request in a transaction function to ensure atomicity.#2019-04-1715:18Laurence ChenHmmm, interesting answer. I deliberately create this story, but I never think that this problem can be solved from UX. 
Thank you.#2019-04-1621:56sooheonHi guys, if I’m attempting to model the equivalent of a multi-column primary key for uniqueness in Datomic (say I have a unique entity or “row” for each Player + Season + Team and a bunch of stats attributes), should I create a derived ID column that is (str player team season) and put db.unique/identity on that, or is there a way to specify that those three columns together represent a unique identity that should be upserted on?#2019-04-1622:14benoitWhen you want to ensure any kind of constraints across attributes, you should write a transaction function.#2019-04-1622:20sooheonThanks, this makes sense!#2019-04-1815:39adamfreyI'm looking for code that I've seen before in I believe the day-of-datomic repos or something similar where Stu Halloway wrote a series of queries showing how to debug a datalog query to see how many entities were being resolved at each step#2019-04-1815:40adamfreydoes have a link to what I'm talking about, I haven't been able to find it#2019-04-1815:46adamfrey@jaret do you know what I'm talking about?#2019-04-1815:47jaret@adamfrey are you talking about decomposing a query?#2019-04-1815:52adamfreythis is it, thanks!#2019-04-1815:47jarethttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2019-04-1815:48jarethelps you find the optimal clause ordering with little domain knowledge ^#2019-04-1815:50zalkyHi all, is there a benefit to using datomic.api/attribute over datomic.api/entity?#2019-04-1816:48timgilbertHey, modelling question here. Say I've got some images which are "confirmed" at some point, at which time I add a datom [eid :image/confirmed-at (Date.)], so the field is either present or missing for any given image.
Now I'm trying to find unconfirmed images. Is there a performance difference between these two clauses?
(not [?i :image/confirmed-at])
[(missing? $ ?i :image/confirmed-at)]#2019-04-1817:55dmarjenburgh@lanejo01 I’ve tried various things, but keep getting the same error.#2019-04-1817:56dmarjenburghHas anyone gotten an ion push/deploy working on windows?#2019-04-1818:37danierouxclojure -A:ion -m datomic.ion.dev '{:op :push}'
{:command-failed "{:op :push}",
:causes ({:message nil, :class NullPointerException})}
How do I debug this?#2019-04-1818:40pvillegas12Look at your code-deploy in aws, it should have the failed deploy and show you more information in the details page#2019-04-1818:44danierouxThere's nothing there - I am push-ing, so did not expect anything there yet.#2019-04-1819:15danierouxI fiddled with deps.edn, and not it's working 😕#2019-04-1819:16Joe LaneWhat happens when you use a prior commit, one that worked? I’ve had issues before and it ended up being that my code wasn’t compiling.#2019-04-1819:19danierouxIn this case, I just removed dependencies, and moved some to -Adev that doesn't get included in -Aion#2019-04-1819:19danieroux*and now it is working#2019-04-1918:33Joe LaneGlad to hear its working now!#2019-04-1818:37danieroux(it has worked before)#2019-04-1904:02sooheon{:command-failed "{:op :push :creds-profile \"sportspedia\"}",
:causes
({:message
"Unable to find a unique code bucket. Make sure that you are running\nDatomic Cloud in the region you are deploying to.",
:class ExceptionInfo,
:data {"datomic:code" nil}})}
#2019-04-1904:02sooheonHas anyone seen this error before?#2019-04-1904:02sooheonI’m able to connect and dev against Datomic Cloud, so it is running.#2019-04-1904:22sooheonI gave explicit :region key and it seems to work — apparently wasn’t picking up region from the profile.#2019-04-1907:28sooheonI’m having trouble understanding how ions work with component / mount. Is there a post about this anywhere?#2019-04-1907:41p14nI do an explicit mount/start on first request#2019-04-1908:38sooheon@p14n Ah I see. If you have N different endpoints, you just put the mount/start in each endpoint?#2019-04-1908:43p14nI only have one graphql one, so that's convenient. Thinking of putting the startup behind a special URL I call after deploy tho#2019-04-1908:44sooheonI see. Are you using lacinia and hooking up the graphql one to lacinia/execute?#2019-04-1908:46p14nYup#2019-04-1919:28staskhi, having an issue with ions tutorial, cant fetch ions dependency, getting following error:
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:0.9.28
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:ion:jar:0.9.28
...
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:ion:pom:0.9.28 from/to datomic-cloud (): Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: B62D357DA6A66B13; S3 Extended Request ID: GPC1UKcSUCjPudHXFq8r/krZOp03kN6L9DH717Sj3J91t/GLNvepfoV2g/0+dFRQmtRMnt6CVTw=)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:422)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:224)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:201)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:261)
... 25 more
the deps.edn is taken from the documentation:
{:deps {com.datomic/ion {:mvn/version "0.9.28"}}
:mvn/repos {"datomic-cloud" {:url ""}}
:aliases
{:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.186"}}}}}
#2019-04-1919:45staskfigured it out, had to specify AWS_PROFILE environment variable#2019-04-2111:54joshkhAnyone up for helping me with a Datalog query? 🙂 Given a collection of "group" entities with references to "item" entities:
[{:label "group1"
:items [{:kind :hat} {:kind :scarf} {:kind :shoe}]}
{:label "group2"
:items [{:kind :shoe}]}
{:label "group3"
:items [{:kind :hat} {:kind :shoe}]}]
Is it possible to bind all group entities, and any related entities of :kind :hat, resulting in something like:
=>
[
["group1" [{:kind :hat}]]
["group2" [nil]]
["group3" [{:kind :hat}]]
]
I can't do :where [?group :items ?items] [?items :kind :hat] because that excludes groups without :hat items, and I want all groups regardless.
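A sketch of the pull-then-filter fallback, as a pure helper over the pulled maps (`keep-hats` is a hypothetical name; the attribute names come from the example data above):

```clojure
;; Keep only the :hat items of each group, leaving hat-less groups in
;; the result with an empty :items vector.
(defn keep-hats [groups]
  (mapv (fn [group]
          (update group :items
                  (fn [items] (filterv #(= :hat (:kind %)) items))))
        groups))

;; With the peer api this could be fed by something like (untested):
;; (keep-hats (map first
;;              (d/q '[:find (pull ?g [:label {:items [:kind]}])
;;                     :where [?g :label]]
;;                   db)))
(keep-hats [{:label "group1" :items [{:kind :hat} {:kind :scarf}]}
            {:label "group2" :items [{:kind :shoe}]}])
;; => [{:label "group1", :items [{:kind :hat}]}
;;     {:label "group2", :items []}]
```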
I could use map specifications in the pull syntax and then filter items for just :hats outside of the query, but I'm curious if there's a way to handle the scenario in pure datalog.#2019-07-0116:22favilatransaction functions run on the transactor whereas peer functions run on the peer. They have different environments#2019-07-0116:22favilayou could include the peer function on the transactor's classpath, but that's something you need to arrange ahead of time#2019-07-0116:23favilahttps://docs.datomic.com/on-prem/database-functions.html#classpath-functions#2019-07-0116:24fmnoiseI'm just thinking how I can call datomic api functions inside the transactor functions using d\... alias but same doesn't work with any other namespace#2019-07-0116:25favilayou could also install your peer function into the db and call it from your tx function with d/invoke#2019-07-0116:25fmnoisehow does it know that d\... means datomic/api#2019-07-0116:25faviladatomic api functions are on the transactor's classpath#2019-07-0116:26fmnoiseah I see#2019-07-0116:26favilathere's no magic to tx fns#2019-07-0116:27favilaaliases and requires are syntatic, it's not automatically shipping code#2019-07-0116:27favilathe only thing shipped is a pr-str of the code body#2019-07-0116:27favilaeverything else needs to be available to the transactor's runtime#2019-07-0116:28favilaclasspath functions IMO are a better solution most of the time#2019-07-0116:29fmnoiseI just thought for some reason that code which uses transactor is in its classpath by default#2019-07-0116:29favilayou maybe are confused because query doesn't work this way?#2019-07-0116:29fmnoiseyep, probably#2019-07-0116:30favilafunctions in a query run on your peer, so the same namespaces are available#2019-07-0116:30favilayou can invoke a fn in a query that you just def-ed a second ago in a repl#2019-07-0116:37fmnoisethanks @U09R86PA4#2019-07-0116:18fmnoiseeg I have myproj.utils.datetime ns with function shift-date
and I want to call it from datomic tx function
currently I have
Execution error at datomic.error/deserialize-exception (error.clj:154).
Syntax error compiling at (0:0).
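One way to apply favila's d/invoke suggestion is to install the helper as a database function, so its source is stored in the db and the transactor needs no extra classpath entries. A sketch (untested; the `:myproj/shift-date` ident and the date-shifting body are made up, standing in for the real shift-date logic):

```clojure
;; Assumes (require '[datomic.api :as d]) and a connected `conn`.
;; The :code body must be self-contained, since the transactor does not
;; have the peer's namespaces loaded.
@(d/transact conn
  [{:db/ident :myproj/shift-date
    :db/fn (d/function
            '{:lang   :clojure
              :params [date days]
              :code   (java.util.Date.
                       (+ (.getTime ^java.util.Date date)
                          (* days 24 60 60 1000)))})}])

;; A transaction function can then call it with:
;; (d/invoke db :myproj/shift-date some-date 7)
```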
#2019-07-0117:24grzmI see that the on-cloud Cloudformation templates have been bumped across the board for 480-8770, however only Compute is mentioned in the Release history. To be clear, if I'm on storage-470-8654.1, I don't need to update storage, correct? (doing a quick diff of the two storage templates indicates they aren't identical, though I haven't gone further to see if it's only whitespace)#2019-07-0117:29marshall@grzm Correct. You’ll see here: https://docs.datomic.com/cloud/operation/upgrading.html#how-to-upgrade
that you can check when the latest storage update was, and anything more recent than that is compute-only#2019-07-0117:33grzmThanks for confirming. Is there anything I need to do to coordinate ion library releases with the upgrades? Or do I only need to update the ion library when I want to use the new features?#2019-07-0117:35marshallwhen you upgrade Datomic the version of the ion libraries running on your datomic nodes will be updated to whatever is the latest at that time#2019-07-0117:35marshallbut there shouldn’t be any forward-breaking changes#2019-07-0117:35marshallif/when you push/deploy you may see that your deps are overridden#2019-07-0117:35grzmOh, right. Silly me. I think you owe me a playful jab for that one the next time we see each other in person.#2019-07-0117:29marshallexcluding anything listed as a critical release#2019-07-0117:42dlhello guys, I am trying to figure out how to make use of websockets with Datomic Cloud. So that pushes are directly sent to the user#2019-07-0117:42dljust like when using Datomic's tx-report-queue#2019-07-0118:15Joe Lane@dlorencic1337 What have you tried?#2019-07-0120:00cjsauerIs this anything to be alarmed about (pun intended)? It seems like CloudWatch is having trouble locating the auto-scaling policies for the datomic DynamoDB tables. I’ve updated my stack once or twice via CloudFormation…maybe they’ve been lost somehow?#2019-07-0120:21Joe LaneI’m reading through the pull documentation and it’s referring to a pull syntax like
(d/pull db [_release/artists] led-zeppelin)
but when I attempt it with
(d/pull the-db [_user/recommends] 11263397115183903)
I get No such namespace: _user, however with
(d/pull the-db [:user/_recommends] 11263397115183903)
I get
#:user{:_recommends [#:db{:id 66353327713036842}]}.
Does anyone have an example with the [_user/recommends] syntax that the documentation is referring to? Am I misunderstanding how it works?#2019-07-0120:38favilalink?#2019-07-0120:38favilathat looks like a typo to me#2019-07-0120:38Joe Lanehttps://docs.datomic.com/cloud/query/query-pull.html#reverse-lookup#2019-07-0120:39favilaI think that is just a typo#2019-07-0120:39Joe LaneOk thanks favila.#2019-07-0206:40dl@lanejo01 I have not much experience with Clojure and Datomic, just have tried the tutorials and created the first AWS API Gateway. Just curious if there is a way on how I can push from Datomic to the client via Websockets just like when using Sente with Clojure (https://github.com/ptaoussanis/sente)#2019-07-0209:55fdserrHi there. Who's got the best Dockerfile and Helm chart for Datomic Pro and wants to share them?#2019-07-0213:11cjsauerIs it possible to augment the solo topology with an NLB to take advantage of HTTP Direct? Just ran into Java cold start on my startup project, and I’m not quite at the point that I can justify the prod topo cost. Obviously the NLB would only have a single target, but it might let me bypass lambda.#2019-07-0213:59Joe Lane@cjsauer Set up a cloudwatch event to ping your lambda. It’s waaay simpler than I thought it would be.#2019-07-0214:06cjsauer@lanejo01 okay, that was plan B. I had a little deflambda macro in mind that could check for a “keep warm” header value. That way I could decorate all my ions with that short-circuit logic. #2019-07-0214:08cjsauerI’ll have quite a few lambda functions, so maybe I should write a “keep warm” lambda that pings all the others. Could register them in an atom as part of deflambda perhaps 🤔 #2019-07-0214:09cjsauerDoesn’t really solve the fact that cold starts affect every concurrent execution, but at that point I think the prod topo becomes viable anyway. #2019-07-0214:14cjsauerActually ion-config.edn could just be read to find all the lambdas that need warming. Much simpler. 
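cjsauer's idea of reading ion-config.edn can be sketched as a pure helper (`lambda-names` is hypothetical; it assumes the standard ion-config shape, where lambdas are declared under a :lambdas map):

```clojure
(require '[clojure.edn :as edn])

;; Given a parsed ion-config.edn, list the lambda names to keep warm.
(defn lambda-names [ion-config]
  (set (keys (:lambdas ion-config))))

;; In a deployed ion the config could be read from the classpath, e.g.
;; (edn/read-string (slurp (clojure.java.io/resource "datomic/ion-config.edn")))
(lambda-names
 (edn/read-string
  "{:lambdas {:get-items {:fn myapp.ions/get-items}
              :add-item  {:fn myapp.ions/add-item}}
    :app-name \"myapp\"}"))
;; => #{:get-items :add-item}
```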
#2019-07-0214:18Joe LaneDo it in data, You can annotate the ion-config.edn however you want. Or make your own file.#2019-07-0215:03souenzzoAny plans about fast/forkable memdb on datomic.client.api ?#2019-07-0215:08souenzzoI'm using datomic-client-memdb that uses datomic-free. But now with com.datomic/client-cloud {:mvn/version "0.8.78"} it do not work anymore
https://github.com/ComputeSoftware/datomic-client-memdb
datomock it is also a great tool but do not work on cloud 😞
https://github.com/vvvvalvalval/datomock#2019-07-0215:19souenzzoAre there docs about :server-type :local?#2019-07-0215:27shaun-mahood@plexus Does https://github.com/lambdaisland/metabase-datomic support Datomic Cloud? I couldn't find any information on it either way.#2019-07-0215:50souenzzo@shaun-mahood probably not. It's hard (and expensive) for a FOSS developer to access datomic cloud; just to test it, you need to pay for aws infrastructure 😞
"datomic-peer" has an awesome feature: it's simple and accessible. Just (d/connect "datomic:") anywhere and it's ready to develop tools
With "datomic-cloud" you need to set up AWS machines, connect proxies, stay online (harder to run CI)... 😣
https://github.com/lambdaisland/metabase-datomic/blob/master/deps.edn#2019-07-0215:54shaun-mahoodThat's kind of what I figured - there's a lot of awesome things about running local Datomic, and the kind of things Metabase do seem pretty well geared towards it. Thanks for letting me know!#2019-07-0215:57fdserrI can confirm metabase-datomic is (so far) for Datomic on-prem only.#2019-07-0216:01plexus@shaun-mahood you rang 🙂#2019-07-0216:06plexusas mentioned metabase-datomic uses the peer api, so it's not currently compatible with Datomic Cloud. I do think it's doable, and perhaps not even that much work, but I haven't looked into it so far. If there's commercial interest I'd be happy to look into it and make an estimate. The development so far has been funded by http://runeleven.com (@fdserr et al) who don't have a need for Datomic Cloud support at this point. There's still quite a bit that could be improved in general as well, so if more companies would be willing to pitch in this could be something everyone would benefit from.#2019-07-0216:10shaun-mahoodMakes sense - nice to hear that Datomic Cloud sounds doable! I assumed it would be much harder without the peer api. Hopefully there will be enough commercial interest to keep improving things. Thanks for the answer!#2019-07-0216:20hadilsWhat version of com.cognitect.aws/ssm is in the {:url ""}?#2019-07-0216:29Alex Miller (Clojure team)Probably none, that should be in maven-central#2019-07-0216:30hadilsMy release is not working, Alex.#2019-07-0216:30hadils{:deploy-status "ERROR",
:message "Could not find artifact com.cognitect.aws:ssm:jar:697.2.391.0 in datomic-cloud ()"}#2019-07-0216:31hadilsThis used to work...#2019-07-0216:32hadilsDoes this not work anymore?
(defn release
  "Do push and deploy of app. Supports stable and unstable releases. Returns when deploy finishes running."
  [args]
  (try
    (let [push-data (ion-dev/push args)
          deploy-args (merge (select-keys args [:creds-profile :region :uname])
                             (select-keys push-data [:rev])
                             {:group group})]
      (let [deploy-data (ion-dev/deploy deploy-args)
            deploy-status-args (merge (select-keys args [:creds-profile :region])
                                      (select-keys deploy-data [:execution-arn]))]
        (loop []
          (let [status-data (ion-dev/deploy-status deploy-status-args)]
            (if (= "RUNNING" (:code-deploy-status status-data))
              (do (Thread/sleep 5000) (recur))
              status-data)))))
    (catch Exception e
      {:deploy-status "ERROR"
:message (.getMessage e)})))#2019-07-0216:32Alex Miller (Clojure team)The error is misleading - it checks every repo but just reports the last error #2019-07-0216:33Alex Miller (Clojure team)ssm is https://mvnrepository.com/artifact/com.cognitect.aws/ssm#2019-07-0216:33hadilsI upgraded to 480--8770#2019-07-0216:33Alex Miller (Clojure team)What version of tools.deps.alpha are you using?#2019-07-0216:34Alex Miller (Clojure team)Or clj?#2019-07-0216:34hadilsOh, I am using a new computer. How do I install tools.deps.alpha onto MacOS?#2019-07-0216:35Alex Miller (Clojure team)Just back up and tell me from the beginning what you’re doing#2019-07-0216:36hadilsOk, I have a new MacOS laptop. I am pushing my code to Datomic Production topology for the first time since getting this computer.#2019-07-0216:37Alex Miller (Clojure team)Which uses clj right?#2019-07-0216:37hadilsYes. The version is 1.10.1.458.#2019-07-0216:43hadilsI just changed tools.deps.alpha to 0.7.516.#2019-07-0216:43hadilsStill doesn't work.#2019-07-0216:45Alex Miller (Clojure team)could you humor me on trying something?#2019-07-0216:45hadilsOf course!#2019-07-0216:46Alex Miller (Clojure team)brew uninstall clojure
curl > clojure.rb
brew install clojure.rb
#2019-07-0216:46Alex Miller (Clojure team)basically a forced downgrade to older version#2019-07-0216:46Alex Miller (Clojure team)then try it and see if it works#2019-07-0216:46hadilsOk.#2019-07-0216:50hadilsThat works! Thanks Alex.#2019-07-0216:50markbastianI'm really liking the new tuple features, especially for defining composite keys. Thanks! One question, though. It appears that if you use a composite key and want to update it you'll need to explicitly add that key to the transaction after the entity has been initially installed. Here's an example:
;Relevant composite key in schema. Other fields (person, time, balance) are primitives with cardinality one
{:db/ident :person+time
:db/valueType :db.type/tuple
:db/tupleAttrs [:person :time]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
Now, given a connection I transact my schema and a single initial entry:
@(d/transact conn schema)
@(d/transact conn [{:person "Mark" :time #inst "2000-02-01" :balance 200}])
So far, so good. Now I want to correct my balance as of the above time:
@(d/transact conn [{:person "Mark" :time #inst "2000-02-01" :balance 100}])
I now get an error:
Syntax error (Exceptions$IllegalStateExceptionInfo) compiling at (src/datomic_playground/understanding_time.clj:61:1).
:db.error/unique-conflict Unique conflict: :person+time, value: ["Mark" #inst "2000-02-01T00:00:00.000-00:00"] already held by: 17592186045418 asserted for: 17592186045425
However, I can do either of these to update my entity:
;Works because I am explicitly creating the identity key
@(d/transact conn [{:person+time ["Mark" #inst "2000-02-01"] :balance 100}])
;Also works, as expected
@(d/transact conn [{:person+time ["Mark" #inst "2000-02-01"] :person "Mark" :time #inst "2000-02-01" :balance 150}])
Is this behavior (needing to explicitly create the id key after the initial transaction) expected? Is there a way to auto-imply the unique key after the first transaction?#2019-07-0219:52stuarthallowayHi @U0JUR9FPH! A couple of thoughts here.#2019-07-0219:54stuarthalloway1. You can alway use the actual entity id, so you don't need to use any identity to perform an update. (I presume you know this but am including it for completeness.)#2019-07-0220:02stuarthalloway2 If you do want to identify an entity by a unique key, you must indeed specify that unique key (not its constituents). This is clean and unambiguous.#2019-07-0220:58markbastianHey @U072WS7PE, thanks for getting back to me! Yes, case 1 makes total sense (and perhaps I should have put that as a third "works" example for completeness. For anyone following the conversation this would be 17592186045418 in this particular case.). I was indeed looking at case 2. My intuition was that transacting two schema-compliant items with the same constituent elements that form a unique key would resolve to that unique key and either insert or update as appropriate. However, I do see your point that future updates could cause ambiguity. Easy enough to handle once you know the behavior. Thanks!#2019-07-0216:50hadilsThanks @alexmiller#2019-07-0216:57Alex Miller (Clojure team)@hadilsabbagh18 thanks, I will follow up with the datomic team (I suspect latest version of clj has changed assumptions ion-dev is relying on)#2019-07-0216:58hadils@alexmiller I appreciate the time you took to help me. I know you are very busy.#2019-07-0216:58Alex Miller (Clojure team)well, I think I'm the one that caused it :)#2019-07-0302:07dlwhat is the best way to use websockets in datomic cloud, just like in datomic with sente?#2019-07-0303:40cjsauer@dlorencic1337 API Gateway recently announced WS support: https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/
I haven’t tried it myself yet, but it seems like a promising ion integration. I was planning on experimenting with it using Cognitect’s aws-api lib: https://github.com/cognitect-labs/aws-api
David recently added support for the ApiGatewayManagementApi. #2019-07-0303:50dlyeah I have heard that news message but didnt find of any tutorials on how to implement it with websockets#2019-07-0303:50dlthats why I asked, thank you man!#2019-07-0303:51dlI am curious because I have read that the Transaction Report Queue is only available with the peer inteface#2019-07-0303:52dlhow would you then go ahead to building an alternative that notifies the api gateway on changes?#2019-07-0304:01cjsauerI was thinking it might be possible to wrap transact! by reading the resulting :tx-data and placing it into a queue (maybe SQS or even a core.async channel). Then some other process would actually interface with APIGW. Build your own report queue basically. #2019-07-0304:02dlok interesting.#2019-07-0304:02dlI will look into it#2019-07-0309:04robert-stuttaford@stuarthalloway @jaret what do i need to do to get an existing database to use the new tuple stuff? transactor and peer are both on the new version. i can make a new database and transact tuple attrs, via the same peer, transactor and storage. i can't transact any tuple attrs to the existing database - it complains that :db/tupleAttrs doesn't exist.#2019-07-0312:55jaretYou’ll need to upgrade your schema with:
https://docs.datomic.com/on-prem/deployment.html#upgrading-schema#2019-07-0312:55jaretOh there is a typo in that anchor link ^ I am going to fix that.#2019-07-0319:52robert-stuttafordthanks @jaret - suggestion 🙂 include this bit of news in any blog post that announces features :+1:#2019-07-0319:34souenzzoI'm still getting
:dependency-conflicts
{:deps
{org.clojure/clojure #:mvn{:version "1.9.0"} ...
when I {:op :push}
I just deployed a fresh cloudformation today using 480-8770 both solo and storage.#2019-07-0319:34Joe LaneAre you running with clojure 1.9 in your code base?#2019-07-0319:35souenzzo1.10.1 in my deps.edn#2019-07-0319:35jarethttps://forum.datomic.com/t/datomic-0-9-5930-now-available/1060#2019-07-0412:40ivanaHello. I try to run figwheel project from re-frame template, everything works fine until I add datomic to deps. In this case lein figwheel dev falls with
Figwheel: Cutting some fruit, just a sec ...
Syntax error (NoSuchMethodError) compiling at (figwheel_sidecar/repl.clj:1:1).
com.google.common.base.Preconditions.checkState(ZLjava/lang/String;Ljava/lang/Object;)V
I tried to exclude some deps and set exact versions (I found this on the internet)
[com.datomic/datomic-pro "0.9.5927"
:exclusions
[org.eclipse.jetty/jetty-http
org.eclipse.jetty/jetty-util
org.eclipse.jetty/jetty-client
org.eclipse.jetty/jetty-io]]
;; directly specify all jetty dependencies
;; ensure all the dependencies have the same version
[org.eclipse.jetty/jetty-server "9.4.12.v20180830"]
[org.eclipse.jetty.websocket/websocket-servlet "9.4.12.v20180830"]
[org.eclipse.jetty.websocket/websocket-server "9.4.12.v20180830"]
but the problem is still the same. What can I do?#2019-07-0412:48souenzzo@ivana both #datomic and #clojurescript use the guava lib
https://mvnrepository.com/artifact/com.google.guava/guava
Usually, the clojurescript version is higher than the datomic version
Excluding it from datomic should fix it#2019-07-0413:15ivanaThanks a lot!
:exclusions [com.google.guava/guava]
solves the problem!#2019-07-0522:54eoliphantHi, I’m running into a situation on Cloud where we’re persistently getting busy indexing anomalies. upgraded to the latest rev, and have killed the transactors, but the problem hasn’t gone away#2019-07-0523:13marshall@eoliphant Can you look in your CloudWatch logs for any Alerts#2019-07-0523:14marshallif there are some, can you please share the text of the alerts#2019-07-0523:21eoliphantyeah there are some. trying to pick out stuff that might be relevant, vs our apps messages#2019-07-0523:33marshall@eoliphant https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs
search the Datomic system logs for “Alert - Alerts”#2019-07-0523:37eoliphantnothing is really jumping out. that reminds me, lol, i meant to submit a feature request. it would be nice to keep the datomic system stuff in a separate log group.
Ok, i’ve updated my filter, still nothing jumping out from datomic itself, it’s almost entirely our alerts that we’re logging when we retry/fail etc#2019-07-0523:39marshallAre there any alerts at all that are not from your own use of cast?#2019-07-0523:40eoliphantok,#2019-07-0523:40eoliphantthink i have something
{
"Msg": "IndexerJobException",
"Ex": {
"Via": [
{
"Type": "clojure.lang.ArityException",
"Message": "Wrong number of args (2) passed to: datomic.excise/pred",
"At": [
"clojure.lang.AFn",
"throwArity",
"AFn.java",
429
]
}
],
"Trace": [
[
"clojure.lang.AFn",
"throwArity",
"AFn.java",
429
],
[
"clojure.lang.AFn",
"invoke",
"AFn.java",
36
],
[
"clojure.core$partial$fn__5824",
"invoke",
"core.clj",
2624
],
[
"clojure.core$map$fn__5851",
"invoke",
"core.clj",
2755
],
[
"clojure.lang.LazySeq",
"sval",
"LazySeq.java",
42
],
[
"clojure.lang.LazySeq",
"seq",
"LazySeq.java",
51
],
[
"clojure.lang.RT",
"seq",
"RT.java",
531
],
[
"clojure.core$seq__5387",
"invokeStatic",
"core.clj",
137
],
[
"clojure.core$seq__5387",
"invoke",
"core.clj",
137
],
[
"datomic.index$merge_db$fn__21535",
"invoke",
"index.clj",
1635
],
[
"datomic.index$merge_db",
"invokeStatic",
"index.clj",
1621
],
[
"datomic.index$merge_db",
"invoke",
"index.clj",
1615
],
[
"datomic.indexer$merge_db",
"invokeStatic",
"indexer.clj",
185
],
[
"datomic.indexer$merge_db",
"invoke",
"indexer.clj",
181
],
[
"datomic.indexer$maybe_queue_index_job$fn__28554",
"invoke",
"indexer.clj",
250
],
[
"clojure.core$binding_conveyor_fn$fn__5739",
"invoke",
"core.clj",
2030
],
[
"datomic.async$daemon$fn__10439",
"invoke",
"async.clj",
146
],
[
"clojure.lang.AFn",
"run",
"AFn.java",
22
],
[
"java.lang.Thread",
"run",
"Thread.java",
748
]
],
"Cause": "Wrong number of args (2) passed to: datomic.excise/pred"
},
"DatomicIndexerDbId": "5f06733b-f7c1-4a6f-9aab-3c665b7d498d",
"Type": "Alert",
"Tid": 595,
"Timestamp": 1562369932970
}
#2019-07-0523:42eoliphantI’ve a lambda pulling stuff off of kinesis, so I turned that back on to generate some activity, these are popping up pretty frequently now#2019-07-0523:56jaret@eoliphant I am going to open a ticket up in your name and copy this info over#2019-07-0523:56eoliphantok thx#2019-07-0523:57eoliphantFYI the storage and compute are on 480-8770#2019-07-0523:57jaretProduction templates?#2019-07-0600:00eoliphantyep#2019-07-0600:01jaretSo this is a prod outage? And are you deploying ions?#2019-07-0600:03jaretI will ask some more questions on the case so the whole team can see.#2019-07-0600:04eoliphantnot a prod outage fortunately, but one of my teams is wrapping a product increment on monday, and this is impacting that, and yes we’re using ions#2019-07-0600:54stuarthallowaywe'll get you sorted ASAP#2019-07-0604:20steveb8nFWIW it’s great to see this kind of support response out in the open. builds confidence for me#2019-07-0604:52fdserrHas anyone got a workaround to enable the ping endpoint in a containerised Datomic without the thing blowing up? (0.9.5930, dev protocol)
docker run -v /config/:/config/ my-docker/transactor
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Critical failure, cannot continue: Error starting transactor
java.lang.RuntimeException: Unable to start ping endpoint localhost:9999
...
Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=3 < max=3 for QueuedThreadPool[qtp1630087575]@61292997{STARTED,3<=3<=3,i=3,q=0}[
I know it is an upstream issue (Jetty). Poking Datomists for a possible hint to enable a transactor health check on K8s/Prom.... TIA!#2019-07-0605:18favilahttps://forum.datomic.com/t/jetty-max-threads-error-when-enabling-ping-health/603/21#2019-07-0605:20favilaWe worked around it by using one of the magic cpu limit numbers (k8s environment)#2019-07-0605:22favilaUnsure of root cause, if is jetty or java8#2019-07-0606:37fdserrGolden, just works. Can't imagine the sweat you put in this. May I ask how you found out the existence of this hidden key? Thanks a bunch @U09R86PA4!#2019-07-0606:42fdserrjust saw the next-next post with it 😃#2019-07-0606:54favilaActually what happened to us was we had it working fine (by accident it turns out) then we adjusted the limits later and it failed. We couldn’t believe it but we found the forum post as confirmation#2019-07-0606:55favilaI think we had 8 and lowered to 4 or something#2019-07-0607:19fdserr> We couldn’t believe it but we found the forum post as confirmation
😃#2019-07-0604:53fdserrBTW congrats Datomic team, the June release is packed with awesome features 🙏#2019-07-0621:44pvillegas12In order for a transaction function to not be applied (in the atomicity sense), do you need to raise an exception?#2019-07-0700:52favilaYes directly or indirectly#2019-07-0715:13fdserr@U6Y72LQ4A Throwing is the way to stop a TX, AFAIK. Throwing clojure.lang.ExceptionInfo helps us deal with explicit business constraints (userland/maybe-recoverable) and we let the rest blow up ("system" error).#2019-07-0715:35pvillegas12Perfect, thanks @U09R86PA4 @U05164QBS for confirming#2019-07-0818:09Nolancurious how others would approach building this, or if this smells:
(make-query {:ns/attr1 "v1" :ns2/attr2 :v2})
;; => [:find ?e :in $ :where [?e :ns/attr1 "v1"] [?e :ns2/attr2 :v2]]
in english, it takes a map, attribute => value, and produces a query for a single entity that has the given value for each attribute. ive implemented make-query using syntax-quote, and also experimented with the :in clause to do a similar thing, but didn’t get too far with it. would love some additional perspective#2019-07-0915:43hadils@nolan I like it. It seems elegant.#2019-07-0915:44hadilsAnyone solved the problem of using an aggregate (max n ?e) where you want to specify n from a function argument? Do I have to build the query up programatically?#2019-07-0919:16jarethttps://forum.datomic.com/t/datomic-cloud-480-8772/1071#2019-07-1009:46DanielHi, I'm trying to follow the datomic tutorial and connect to a running datomic server. But the following code gives an exception Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58). No subject alternative names present error
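A minimal sketch of nolan's make-query idea above, built with plain data instead of syntax-quote (this is my own sketch, not nolan's actual implementation):

```clojure
;; Turn {attr value, ...} into a Datalog query for a single entity that has
;; the given value for each attribute. Plain vectors and quoted symbols —
;; no syntax-quote gymnastics required.
(defn make-query [attr->value]
  (into [:find '?e :in '$ :where]
        (map (fn [[attr value]] ['?e attr value]))
        attr->value))

(make-query {:ns/attr1 "v1" :ns2/attr2 :v2})
;; => [:find ?e :in $ :where [?e :ns/attr1 "v1"] [?e :ns2/attr2 :v2]]
```

Keeping the query as data also leaves it open to extension, e.g. conj-ing additional :where clauses before running it.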
(require '[datomic.client.api :as dc])
(def config {:server-type :peer-server
             :access-key "myaccesskey"
             :secret "mysecret"
             :endpoint "127.0.0.1:8998"})
(defn connect! []
  (let [client (dc/client config)]
    (dc/connect client {:db-name "hello"})))
(connect!)
I'm using the latest version of datomic-pro-starter 0.9.5930#2019-07-1010:16cichliMaybe try localhost instead of 127.0.0.1?#2019-07-1009:48DanielI've tried latest versions of AdoptJDK's OpenJDK 11, 12, 8 with no success. Running on MacOS.#2019-07-1009:49Daniel=> (pst)
ExceptionInfo No subject alternative names present {:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "No subject alternative names present", :cognitect.http-client/throwable #error {
:cause "No subject alternative names present"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "No subject alternative names present"
:at [sun.security.ssl.Alert createSSLException "Alert.java" 131]}
{:type java.security.cert.CertificateException
:message "No subject alternative names present"
:at [sun.security.util.HostnameChecker matchIP "HostnameChecker.java" 137]}]
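Picking up cichli's suggestion above: the handshake fails because the dev certificate is not issued for the raw IP. A sketch of a config that should avoid the error, assuming the on-prem client's :validate-hostnames option (verify against your client version before relying on it):

```clojure
;; Connect via the hostname the dev certificate is issued for (localhost)
;; instead of 127.0.0.1. :validate-hostnames false is the escape hatch the
;; on-prem client docs describe for self-signed/dev certs — an assumption
;; here, double-check it for your datomic client version.
(def config {:server-type :peer-server
             :access-key "myaccesskey"
             :secret "mysecret"
             :endpoint "localhost:8998"
             :validate-hostnames false})
```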
#2019-07-1019:38pvillegas12I want to reference an entity with a tempid for :db/id for different datoms. Is this possible?#2019-07-1019:51ghadiif you talk about the same tempid in different datoms within a transaction, it will end up becoming the same entity when the transaction commits#2019-07-1019:52ghadi@pvillegas12 [[:db/add TEMPID a v t] [:db/add something-else a TEMPID]]#2019-07-1019:52ghadi^ create an entity and point to it -- in the same tx#2019-07-1019:53ghadi[{:db/id "tempA" ....}
{:db/id [:lookup/ref 42]
:points/to "tempA"}]#2019-07-1023:39Drew VerleeSo you can't upgrade a "solo" running Datomic stack?
https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade
> This upgrade process converts your Master Template Stack to the setup described in the Production Setup documentation.#2019-07-1023:55Drew VerleeI ask because i'm just currently using my setup to learn and i'm assuming the production setup is more expensive.#2019-07-1100:03Joe LaneYou cant downgrade#2019-07-1100:26ghadiyou can upgrade it no problem#2019-07-1100:26ghadiyou can also move from solo -> prod topology#2019-07-1100:27Drew Verleeboth of those would seem to be true, but the docs imply that you can upgrade from solo -> solo#2019-07-1100:27Drew Verleeupgrade the CTVersion#2019-07-1100:27Drew Verleethe goal here is to get the newer features#2019-07-1100:27ghadiyes the templates are public#2019-07-1100:27ghadihttps://docs.datomic.com/cloud/releases.html#2019-07-1100:27ghadijust got a version bump a few days ago#2019-07-1100:27ghadiyesterday maybe#2019-07-1100:28ghadiI am testing the latest release on a dev cluster before rolling it out to prod in a few days#2019-07-1100:29Drew Verleeso does "upgrading" in the docs i link refer to moving from solo->prod, and not changing template versions?#2019-07-1100:30Drew Verleeor put another way, if i just wanted the latest features for my solo topology. what would i do 🙂#2019-07-1108:30stask@U0DJ4T5U1 if it’s your first upgrade, just follow the steps in https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade
Storage template is the same for both Solo and Production as far as i know. Just choose Solo for the Compute template.#2019-07-1112:39Drew VerleeBut you have to choose a larger instance, I assume overall this is more money per month#2019-07-1113:49Jacob O'Bryant@U0DJ4T5U1 I did the "first upgrade ever" (stayed on solo), and my datomic set up currently consists of a t2.small and a t2.nano instance--I think this was the same as before, but I don't remember. What instances do you have right now? In any case, my monthly aws billing estimate is still the same (though I only upgraded ~1 week ago)#2019-07-1113:55Joe Lane@U0DJ4T5U1 Maybe the language is overloaded when discussing "upgrading" (transitioning) from a solo topology to a production topology. If we use the word "upgrading" to mean getting latest features (version increment) it is absolutely possible to "version increment" a running solo system while keeping it a solo topology, i've done it a dozen times.#2019-07-1201:41Drew Verleethanks everyone. i understand now that i can still choose the solo topology and have done so.
Currently i'm not seeing the option to reuse existing storage as specified here: https://docs.datomic.com/cloud/operation/upgrading.html#first-upgrade#2019-07-1110:07conanHi, we need to run some data migrations in Datomic Cloud. How would you go about this? We're thinking that Ions may be the solution, is that the right approach?#2019-07-1116:01shaun-mahood@conan I've done a bunch of migrations from local databases to the cloud just using the socks-proxy. https://youtu.be/oOON--g1PyU is a great reference for how you could approach the problem.#2019-07-1116:05conanSo I need to transform data in the way I would do using db functions in on-prem. If i read data, calculate txes and write them, i may leave my data in an inconsistent state#2019-07-1121:24chrisblomwould using :db/cas be an option, to check that the state is still valid?#2019-07-1116:05conanIt's not entirely clear to me how I do this in cloud#2019-07-1116:22shaun-mahoodAhh - check out https://docs.datomic.com/cloud/transactions/transaction-functions.html#classpath and see if that gives you what you need.#2019-07-1201:50Drew Verleeim upgrading my datomic stack for the first time, the instructions say to set "resuse existing storage" to True. But i dont see this option anywhere.#2019-07-1204:40eoliphantYou’ve pasted in the URL on the Create Stack Dialog, and are on the first page of inputs for the storage stack? It’s the second option down#2019-07-1205:51Drew VerleeI did#2019-07-1222:46Drew Verlee#2019-07-1222:47Drew Verlee#2019-07-1222:57Drew Verleeok, thats what you get if i enter the URL from solo topology on https://docs.datomic.com/cloud/releases.html
but if i enter the one for storage i get the option to re-use existing storage#2019-07-1222:58Drew Verleethe instructions say
> for the Storage Stack you want from the release page
I thought solo, production, storage were all examples of "storage stacks". if not, then what else is one?#2019-07-1216:49Mark Addleman@jaret fyi - I just tried to deploy a Datomic Solo topology from the AWS Marketplace. The AWS Marketplace UI allowed me to NOT enter a Key Pair. Subsequently, the Cloudformation Create Stack operation fails with a somewhat obscure message. Not sure if this is something you have control over#2019-07-1216:56jaretYeah, sorry Mark that’s a limitation on AWS’s side. We’ve asked/lodged requests to be able to require that field to launch, but its not allowed.#2019-07-1220:01Mark AddlemanNo worries. That's what I figured#2019-07-1219:26Jacob O'Bryant@jaret I'd really appreciate it if you/someone could take a look at this, unless I'm mistaken it's a very serious bug with the new composite tuple feature: https://forum.datomic.com/t/upsert-behavior-with-composite-tuple-key/1075/3
thanks. I'm guessing that bug is the root cause of this too: https://forum.datomic.com/t/bug-in-db-ensures-boolean-attr-handling/1073#2019-07-1616:08jaretThank you for the report. Thanks to your example, we have identified an issue with the treatment of false in tuples. We have a fix in the works for the next release. However, upsert does require that you specify the unique key. You can use the entity id or if you do want to identify the entity by a unique key then you have to specify the key (not its constituents). We’re going to update the docs to better address this. I have also updated your posts.#2019-07-1319:18Drew Verleehttps://docs.datomic.com/cloud/operation/upgrading.html#org4ebe4b2 has a broken link "environment map". i already emailed support, if by any chance someone knows what it should be i would appreciate knowing so i can keep moving forward 🙂#2019-07-1403:10Drew VerleeThey fixed the link.#2019-07-1420:57joshkhi have a RESTful API written in Clojure with no dependency on Datomic Cloud (although many of services make use of Cloud and Ions, so the infrastructure exists). is a new Query Group, http-direct, and Ions combination still a good solution for deploying it?#2019-07-1421:03joshkhthe crux (ha ha) of my question is regarding micro services. in this case it's a simple API and Ions makes it so easy to deploy, but for every micro service i face another ~$8 a month for a single t2.medium to support its new Query Group, which is the minimum EC2 instance size. with many services (including dev, stg, and prod targets) it adds up.#2019-07-1503:27eoliphantFor us part of the value is not jumping to microservices right away. We’ve a convention for separating out what are essentially bounded contexts into separate projects, and a few scripts to check for architectural conformance. So we start with a ‘managed monolith’, but can pull stuff out if and only if it’s really necessary. 
Each one uses its own db, etc so breaking them out into QG’s when we hit that point isn’t typically a big deal#2019-07-1514:35dmarjenburghDoes datomic ions support cross-account deployments? We have a dev/test account and an acc/prod account. We want to do an ion push in dev/test and deploy the generated artifact in acc/prod#2019-07-1514:37marshallno, you’d need to push to prod#2019-07-1516:07calebpHi, I’m looking for information on error recovery practices for Datomic cloud. Not sure that’s the right terminology, but one problem that particularly worries me is what if someone accidentally calls delete-database? I couldn’t see a way in https://docs.datomic.com/cloud/operation/access-control.html to prevent this and if I didn’t have a backup, my company’s data would just be gone. I’ve been following this thread https://forum.datomic.com/t/cloud-backups-recovery/370/12, but haven’t seen any details there.#2019-07-1607:11TuomasI’m trying out datomic cloud and I’m pretty new to this kind of stuff. Tried to launch to eu-west-1, but failed. After digging around I discovered the compute ami in solo-compute-template-8772-480-ci.template Mappings.RegionMap.eu-west-1.Datomic is invalid. Decided to launch to eu-central-1 because its ami mapping seems correct, but thought I should also report this. Any idea where these kinds of reports should go to?#2019-07-1616:11marshallWe have filed a support ticket with AWS regarding that issue @koivistoinen.tuomas#2019-07-1618:36pvillegas12I’m using Ions and Datomic Cloud. I would like to understand how many requests I can handle per second. Is the clojure web app I expose as an ion a single process? Will it be threaded in some way? Can I configure this behavior?#2019-07-1618:37Joe LaneThat depends entirely upon what those requests do.#2019-07-1618:38pvillegas12A request may do a 10s job, so trying to understand if that will block the entire API I am exposing.#2019-07-1708:03sooheonWhat, if any, are the differences between missing? 
and not in the following?#2019-07-1708:04sooheon(d/q '[:find (pull ?feed [*])
       :where
       [?feed :rss-feed/url]
       [(missing? $ ?feed :rss-feed/etag)]]
     db)
(d/q '[:find (pull ?feed [*])
       :where
       [?feed :rss-feed/url]
       (not [?feed :rss-feed/etag])]
     db)
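One difference worth noting (my addition, not from the thread; the second attribute name is made up for illustration): for a single clause the two queries are equivalent, but not negates a whole conjunction of clauses, which a single missing? predicate cannot express:

```clojure
;; Feeds lacking at least one cache header: the `not` removes an entity only
;; when it has an etag AND a last-modified value, so feeds missing either
;; one survive. (:rss-feed/last-modified is a hypothetical attribute.)
(def feeds-missing-cache-headers
  '[:find ?feed
    :where
    [?feed :rss-feed/url]
    (not [?feed :rss-feed/etag]
         [?feed :rss-feed/last-modified])])
```

For the plain single-attribute case, missing? stays the clearer spelling.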
#2019-07-1708:05sooheonThe only thing I’ve noticed is that missing? doesn’t complain when you ask for a never-transacted attribute (i.e. (missing? $ ?e :random-made-up-or-misspelled-kw))#2019-07-1709:11dmarjenburghCan you configure the path to ion-config.edn? I have slightly different configurations per environment.#2019-07-1710:28holgeris it possible that two jvm processes share the same valcache (directory) for a short period of time? our deployment first starts a new version before it stops the old one#2019-07-1918:00jaretYes, I believe valcache directory can be shared, but it has not been tested. Not to put you out on a limb, but if you notice any issues could you send them to me in support? And I encourage you to test in non-production first.#2019-07-1918:01jaretI’d also be interested to see metrics from your test system. Given that we haven’t tested this configuration it remains unsupported, but theoretically could work.#2019-07-2215:57holgerThanks! In case we decide to go that route, I'll let you know!#2019-07-1820:57grzmAnyone have a reference or experience setting up a Pedestal ion endpoint somewhere other than at the root of the domain? Frex, I want to set up the endpoint at "/api/v1/{proxy+}" rather than at "/". From my cursory trials, looks like I end up needing to include "/api/v1" in each of my routes. Wondering if there's a way around that.#2019-07-1821:09Joe Lane@grzm Are you able to use http-direct?#2019-07-1821:11grzmHaven’t tried yet: still using a lambda/ion#2019-07-1821:12Joe LaneWanna zoom?#2019-07-1821:14grzmWhat is this, a Nissan commercial?#2019-07-1906:54Simon O.Beginner Q: Given the sample schema, is it possible to just retract entity [:step/name "step2"] in line27 from entity [:lad/name "lado"]without maybe having to removing :db/ident :lad/step, adding it back, and subsequently filling it with needed collection of :step/name? 
and how...#2019-07-1913:07donaldball[[:db/retract [:lad/name "lado"] :lad/step [:step/name "step2"]]] will retract the datom asserting a ref between lado and step2.
[[:db/retractEntity [:step/name "step2"]]] should retract all datoms that refer to step, including that ref from lado#2019-07-1914:01Simon O.First step does it. thanks#2019-07-1918:13daniel.spanielDoes anyone know how to configure my ion to issue an http request to sendgrid? Do I need configure a NAT or Egress-only Gateway from my VPC? And if so, is there some example of doing that?#2019-07-1918:14Joe Laneclj-http should be able to make an outbound http request.#2019-07-1918:15daniel.spanielfrom within my ion code @lanejo01?#2019-07-1918:15Joe Laneyup#2019-07-1918:17Joe LaneI do that in one of my production systems with twilio#2019-07-1918:17daniel.spanielyeah, same here#2019-07-1918:17daniel.spanielIt might be a delay then from twilio#2019-07-1918:19daniel.spanielthanks @lanejo01!#2019-07-1918:19Joe Lanenp, have fun!#2019-07-1918:21Joe Lane@dansudol One thing I ran into was if the ion was cold then sometimes twilio would timeout on callbacks because twilio has a 5 second timeout.#2019-07-2216:21ghadinot terrible at all, have you seen codeq?#2019-07-2216:53hadilsHi! Any suggestions on testing Lambda/APIGW ions on a local MacOS laptop? I am trying out AWS SAM. How do I build a deployment package for my application locally? Is it just the zip file that I would deploy to Datomic Cloud?#2019-07-2312:51eoliphantThere’s no ‘package’ per se with ions, though the ‘push’ does shoot your code as is + dependencies up to an S3 bucket/folder.
I’d check out Stu’s videos on the typical workflow. But in general it’s very much in line with your typical REPL-driven/oriented workflow. You can test/exercise your generic pure funcs as is, you can connect to the db in question from the repl, and most ion/datomic funcs that can be ‘embedded’, like transaction functions, are pure and can (and should) be tried out directly. At that point, you can then push, then interactively exercise them on the server.
I’m literally doing that right now. Helping one of my devs optimize some stuff, so I created a new transaction func, spec’d and tested it totally client side, then ‘allowed’ and pushed it, and ran some actual transactions that referenced it.#2019-07-2317:48hadilsSo you avoided the whole SAM local workflow, then?#2019-07-2915:59eoliphantsorry just saw lol, yeah, there’s less need for it IMO. while tx, query, etc funs do have to be on the server at some point. You can generally do a ton of testing, etc with them locally, so by the time you actually push them, you’re pretty confident that they’re doing what you expect.#2019-07-2217:57joshkhmust query functions be deployed to the main (cloud) Compute node, or can they be deployed as part of Query Groups?#2019-07-2218:01joshkhupdate: yes. found it in the docs 🙂#2019-07-2218:01ghadiAnywhere :)#2019-07-2218:06joshkhhmm, are you sure? i have a query group that makes use of query functions. i recently removed them from the main compute group and now the query groups fail.#2019-07-2218:08joshkhby removed i mean that the query functions were defined in both Ions projects due to forking the code base. when i removed them from the main compute group's configuration and deployed to the main compute group the query groups then failed.#2019-07-2218:09marshall@joshkh failed to do what?#2019-07-2218:09marshallyou can definitely have a different set of query functions on your primary group than on a given query group#2019-07-2218:10marshallbut you can only invoke them if you are connected to that query group#2019-07-2218:10joshkhyup, that's what i'm thinking. 
i might be deploying to the query group but connecting to the main compute group.#2019-07-2219:05joshkhupon further testing, it looks like the client's :endpoint value overrides the client's :query-group value, but only when running locally.#2019-07-2221:26PB@ghadi I have not seen codeq#2019-07-2312:42eoliphantHey guys, I’m running into an issue trying to deploy from my CI/CD server, I’m getting the following error
“Unable to find a unique code bucket. Make sure that you are running\\nDatomic Cloud in the region you are deploying to”
Definitely in the same region, so not sure what else to look at#2019-07-2312:47AdrienThat's because Datomic releases are on an S3 bucket in another region and you cannot do cross-region S3 copies.
There is a discussion concerning this issue on the forum: https://forum.datomic.com/t/ions-push-deployments-automation-issues/715#2019-07-2312:48AdrienIn the last answer you have a workaround using VPC with a NAT gateway#2019-07-2312:53eoliphantRight, right I’ve seen that, but in my case, everything is in us-east-1. The script actually works fine for my dev env, but is throwing that for our int-env. Our ‘shared’ account/vpc, as well as the ones for dev and integration are all in us-east-1#2019-07-2316:54calebpI inadvertently converted one of my solo systems to a production system by upgrading it with the production compute template. Is it OK to leave the storage stack, delete the compute stack and recreate the compute stack with the solo compute template?#2019-07-2316:58calebpAssuming this is OK since solo and production use the same storage stack template#2019-07-2320:17matthavenerIs there any performance benefit of pull-many vs mapping over pull ?#2019-07-2401:13favilaPull-many will parallelize if it can; map over pull will not#2019-07-2401:13favilaPull many is like pull in a query#2019-07-2417:29matthavenerthanks as always#2019-07-2407:28fdserrOn-prem pro: is it possible and 200% safe to use a single set of infra for several transactors (with different dbs) ? DDB table, role set, log bucket.#2019-07-2421:42genekimHello! After six months of dabbling with Datomic Cloud on my laptop, I'm ready to use it in a personal project or two! But after almost two hours of Googling, I've hit a problem...
What is the easiest way to connect to a Datomic Cloud instance from something like Heroku? There's not an easy/obvious way to use datomic-socks-proxy...
In the ideal, I'd love to be able to connect to Datomic on Heroku without a need for a proxy or sidecar (e.g., like a simple call to connect() with a Postgres/MySQL-style connection string?). I'm trying to simplify my life, so not having to set up a docker container or Kubernetes sidecar would be awesome.
For similar reasons, not having to learn API Gateway and IAM at the same time as learning Datomic would be a plus. :) (Because the AWS feature screen has always scared me, I've stuck to Heroku and Google GCP/GKE, for better or worse.)
Any advice? Many thanks in advance!!!! #2019-07-2422:44eoliphantThere are possible ways, but it may be more trouble than it’s worth. It’s definitely going to be easier to work on something in AWS. if you run elsewhere you’d need to have AWS credentials, run the proxy, etc etc.
What kind of app are you planning? If you use ions, there’s a pretty easy, one-time setup for API Gateway that’s clearly outlined in the docs. Once it’s set up for your ‘entry point’ ion there’s no need to mess with it.
If you’re planning a separate app that just connects as a client to the db, then again, you’re gonna need to do some stuff around the networking and what have you, as well as manage AWS access keys on Heroku. Not 100% sure, but I’d bet the complexity of getting all that working on Heroku might be equal to or even more than just getting ions, etc going#2019-07-2717:39genekim@U380J7PAQ Thanks for the thoughtful question — was pondering this, because I think it does inform what the right decision is.
I have an app that collects lots of data on books that’s been running for 4 years, data currently stored in MySQL (originally totally accessed thru Ruby ActiveRecord). Now all the data is collected and accessed thru a Clojure web app.
I want to store new book metadata, like publishers, categories, in Datomic, because I’m fatigued by SQL database migrations. And I think the fluid way that the schema can be changed in Datomic is super appealing to me.
I imagine that one Ion REST API entry point could be invoked to do operations like :add-publisher, :update-publisher, :delete-publisher, etc...
And then that endpoint is called by an app that runs anywhere, maybe authenticated by a certificate, secret or something?
Is that thinking reasonable? Did I miss anything huge? Thx for the great question!#2019-07-2717:47genekim@U380J7PAQ Am I correct in thinking that I’d call the API Gateway endpoint with something like this?
https://github.com/jerben/clj-aws-sign#2019-07-2717:47eoliphantHmm.. So Ions absolutely make your last bit far more palatable. you could leave everything else as-is, then just create and deploy your API, exposing it via API gateway, that can be called from anywhere, and API gateway natively supports stuff like API keys for access without too much additional fuss.
So is your plan to migrate from the MySQL/Ruby stuff? Wasn’t clear on whether this new piece is solely complementary or your new direction overall. I can tell you for sure, that if you’ve already drunk the Clojure/Datomic kool-aid, that you’ll eventually end up with far fewer moving parts if you just move it all that way#2019-07-2717:48eoliphantAPI gateway supports a variety of authentication methods. If you just need to secure it ‘system to system’ it probably makes more sense to just use an API key. Then there’s no need to sign, etc, you just send the key in a header#2019-07-2717:49eoliphantif the API needs to ‘know’ say the identity of the user making the call, then that’s when you get into more sophisticated use cases, like passing JWT’s around or something#2019-07-2717:51eoliphanthere’s the info on adding that: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html#2019-07-2717:52eoliphantthere’s a bunch of stuff in there that supports the more tpyical use case of issuing them to ‘customers’ etc, but in your case it’s really just a single one for your app running in heroku#2019-07-2717:58genekimHoly cow, @U380J7PAQ — this is SOOOO helpful. I owe you drinks for remainder of my lifetime for this — just say when and where! I can’t tell you how many mysteries you’re solving for me!
(But first on MySQL: I’m inclined to leave all the data there. There’s GBs of it, and no real reason to change it — lots of code read and write to it just fine.)
Wow, that link is great! The idea of just passing in a secret string is just my speed. 🙂
I’m looking at the “EXAMPLE: Create a Request-Based Lambda Authorizer Function” example right now at https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
Am I on the right track? Thx again!!!!#2019-07-2718:04eoliphantlol, no problem at all. This community is awesome. I’ve gotten tons of help Just pay it forward 🙂
Ok cool regarding your existing data, that’s certainly a viable approach, and you can also go ‘full AWS’ even with that setup, dropping your existing MySQL db into MySQL on AWS RDS or even their MySQL compatible Aurora if you need the scalability. Nice thing about Datomic being a bit of an ‘un-database’ is that it's also easy to munge up results from other sources if necessary; that may or may not be applicable to your use case
yeah check out the authorizer, though in your case it may not be necessary, they are more useful when say you’re handing out keys to customers, and they perhaps have different levels of service. Where say, the user who corresponds to key XXX has a basic subscription and only gets 1000 API calls a month and only certain API’s vs the user for key YYY that has unlimited access. In your case, you just want to prohibit ‘open’ access which the key more or less does on its own#2019-07-2718:13genekimThis is awesome! I’ll give it a shot this weekend — thanks again!!!! I can’t tell you how timely and spot-on it is! And I’ll keep you posted, hopefully with a screenshot of a successful CURL request and response! 🙂#2019-07-2718:14eoliphantno probs at all, let me know if you need any more info#2019-07-2718:52genekimWow!!! I got my first lambda function running, and managed to get an API key associated with it! Amazing! THANK YOU!
2015-MBP genekim$ curl -H 'x-api-key: xxx'
"Hello from Lambda!"
Next step… Follow the Ions tutorial! Wow! :exploding_head:#2019-07-2808:01genekimThanks to @U380J7PAQ encouragement and help, I’ve gotten an AWS API Gateway and my first tables and queries set up. But I’m having problems getting the com.datomic/ion deps downloaded. (I’m using the ion-starter/deps.edn file.)
I’m getting an error very similar to what was reported here: https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508
I can list my own S3 buckets, but I get a permission denied error when I try to list the needed S3 bucket where the ion deps are stored:
2015-MBP:hodur-books genekim$ aws s3 ls
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I’m sure this is easy, but I’m just too new to AWS to see it… Thank you!!!#2019-07-2808:13genekimWait… hang on… I can download the .jar file…#2019-07-2808:34genekimOkay, doing my first push and deploy! (And then heading to bed. I have an early start tomorrow! 🙂 This is super exciting, @U380J7PAQ!#2019-07-2915:56eoliphantGlad to hear it’s working 🙂#2019-07-2421:48John MillerThe Datomic cloud client accepts a set for a collection binding, but not a sorted-set. That seems like a bug? I’m using com.datomic/client-cloud {:mvn/version "0.8.78"}#2019-07-2422:38shaun-mahood@genekim The only easy-ish option I know of is using Ions - I think I would love to use HTTP Direct but it's only available with a production topology#2019-07-2522:16genekimThanks @U054BUGT4! Based on this, I’m likely going to run this inside of GKE, with the socks proxy running inside the container.
(Right now, I’m super-parsimonious about anything new I learn. Avoiding yaks altogether, let alone shaving them. :)#2019-07-2522:17shaun-mahoodOh yeah, yak-avoidance is so important.#2019-07-2522:19shaun-mahoodI'm using the socks proxy to run a local server, which connects to my datomic cloud instance and a local database, and I've only had one issue with it over the past few months - our network blipped a bit and I had to reset some network gear to fix it. No idea how it's going to handle running inside GKE, though.#2019-07-2522:26shaun-mahoodI'm gradually migrating things to ions, though, moving functions from my local ring server to ions one at a time.#2019-07-2512:04keesterbruggeI'm trying to do a nested upsert, but this doesn't seem to be possible. I found one technique to do a nested insert, and I found a different technique to do a nested update, there doesn't seem to be a technique that will do an insert or an update (upsert) depending on the state of the database. Is this correct?
Given the following schema
(def schema
[{:db/ident :day
:db/unique :db.unique/identity
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :metric/day
:db/unique :db.unique/identity
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :rev
:db/valueType :db.type/double
:db/cardinality :db.cardinality/one} ])
;; And some setup
(require '[datomic.api :as d])
(def db-uri "datomic:")
(d/create-database db-uri)
(def conn (d/connect db-uri))
@(d/transact conn schema)
With an empty database, the following inserts 3 datoms
@(d/transact conn [{:metric/day {:day 3} :rev 1.4}])
When I want to do an update to the previously created entries, the following works:
@(d/transact conn [{:metric/day [:day 3] :rev 1.5}])
However the previous insert structure doesn't and gives an error:
@(d/transact conn [{:metric/day {:day 3} :rev 1.5}]) ;=>
<error-here>
So this last variation is only a way to do a nested insert, not an update. If we try the nested update variation on an empty database, this fails too:
@(d/transact conn [{:metric/day [:day 3] :rev 1.5}])
<error-here>
The more verbose version with tempids doesn't work either:
(let [day-id (d/tempid :db.part/user)]
@(d/transact conn [{:db/id day-id :day 3}
{:metric/day day-id
:rev 1.1}]))
Am I missing how upserting could work in this nested situation or is this a limitation of Datomic by design? Any help is greatly appreciated!#2019-07-2515:04donaldballI have written my own fns to assert a possibly existing tree of data into the database.#2019-07-2515:07donaldballI’m curious about datalog rules. Sometimes I have 4-5 rules with the same name, it’s not clear which one(s) are matching, and it’s a little tedious to debug. A more debuggable form might be to give each rule a distinct name and use an or clause for each case in the general rule. Is anyone aware of the performance implications of this approach?#2019-07-2520:44Drew VerleeWould it be correct to say datomic employs forward chaining logic?#2019-07-2521:09Joe LaneI don't think so#2019-07-2521:09Joe LaneThat would be a rules engine, if i'm not mistaken.#2019-07-2521:14Drew VerleeRight, I meant to say backwards chaining 🤔#2019-07-2612:24timeyyyHi.
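For what it's worth, one technique that can behave as a nested upsert against the schema from the question above (a sketch, not the only approach, and string tempids require a reasonably recent peer): give the child a string tempid and let :db.unique/identity unify it with an existing entity.

```clojure
;; Sketch: "day3" resolves to the existing {:day 3} entity via
;; :db.unique/identity upsert, or creates it when absent, so the same
;; transaction works as both insert and update.
@(d/transact conn [{:db/id "day3" :day 3}
                   {:metric/day "day3" :rev 1.5}])
```

Because :metric/day is itself :db.unique/identity, the second map also upserts: it updates the existing metric for day 3 when there is one, and creates it otherwise.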
I'm curious as to the purpose of the created route53 dns entries when using datomic cloud.
Is this for billing or something? Why is this created under http://xyz-datomic.net?
Is this designed to be configured for my application use?#2019-07-2614:23Joe Lane@timeyyy_da_man I think those are private routes for datomic to resolve machines within the vpc.#2019-07-2614:24Joe LaneI don't believe it's for application use.#2019-07-2620:56fmnoiseis there a way to list datoms by given transaction id in datomic on-prem?#2019-07-2621:19souenzzo(d/q '[:find ?e ?a ?v ?tx ?op
:in $ ?tx
:where
[?i :db/ident ?a]
[?e ?a ?v ?tx ?op]]
(d/history db) tx)
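On-prem, the Log API is another way to get exactly one transaction's datoms (a sketch; tx-range reads the transaction log directly instead of scanning the history db):

```clojure
;; Sketch, peer API: the log entry for a transaction carries its datoms
;; under :data. Assumes a peer connection `conn` and a tx id or t `tx`.
(require '[datomic.api :as d])

(let [log (d/log conn)]
  ;; tx-range is inclusive of start, exclusive of end
  (:data (first (d/tx-range log tx (inc tx)))))
```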
#2019-07-2621:25fmnoisethanks @U2J4FRT2T, I was using similar query but got Insufficient binding error, [?i :db/ident ?a] does the trick 🎉#2019-07-2621:27souenzzoNot sure if in the peer API it's faster to access it from raw datoms or some other API
ATM I'm on client API#2019-07-2621:33benoit@U4BEW7F61 I generally use tx-data https://docs.datomic.com/on-prem/log.html#log-in-query#2019-07-2912:08jaihindhreddyIf a Datomic DB contains [e :a v 40 true], is asserting [e :a v 42 true] a no-op?#2019-07-2912:17souenzzo@jaihindhreddy it will generate just the [e :db/txInstant #inst"now" true]#2019-07-2912:21jaihindhreddyGot it. And that would mean, we would know that fact(s) that were already true were reasserted, but not which ones exactly.#2019-07-2912:21jaihindhreddyMakes sense.#2019-07-3015:10tony.kayIs anyone aware of Datomic performance issues with latest JDK 11? We’re seeing some very poor query performance after moving from JDK 8 to 11 on Datomic 0.9.5697#2019-07-3015:12matthavenercould be https://bugs.openjdk.java.net/browse/JDK-8219233 ?#2019-07-3015:12tony.kayI looked at that#2019-07-3015:12marshall@tony.kay I would recommend moving to the latest release
several dependency updates (https://docs.datomic.com/on-prem/changes.html#0.9.5927) include changes to libraries that may impact jdk11 support#2019-07-3015:15tony.kayok, we’ll try that.#2019-07-3015:21Alex Miller (Clojure team)it's highly unlikely to be that jdk issue - that primarily affects code loaded via user.clj#2019-07-3015:22Alex Miller (Clojure team)but fyi, Clojure 1.10.1 includes a Clojure-side mitigation for that#2019-07-3018:28joshkhwhat's the difference between the com.datomic/ion-dev and com.datomic/ion libraries? i tend to only use the ion-dev library in my projects, and given that ion has its own release cycle i'm wondering if i missed something in the docs.#2019-07-3018:31marshall@joshkh both are required in your ion project
ion-dev is used for push/deploy/etc
‘ion’ is required for Ion projects and also includes the parameter helper functions, the cast namespace, etc#2019-07-3018:32marshallalso the ionize function#2019-07-3018:32marshallhttps://github.com/Datomic/ion-starter/blob/master/deps.edn#2019-07-3018:32marshalland https://github.com/Datomic/ion-event-example#2019-07-3018:33joshkhah, thanks marshall. i must have dropped ion when i switched to http-direct#2019-07-3018:39joshkhno, that's not true. i'm still using it to fetch environment parameters. i must have crossed some mental wires when upgrading my various query groups. 🙂 thanks again#2019-07-3115:37hadilsAnyone have any experience with using core.async with Lambda Ions? I would like to know if there are any issues with Lambdas timing out with processes running in the background. Thanks!#2019-07-3115:58jarethttps://forum.datomic.com/t/datomic-0-9-5951-us-now-available/1103#2019-07-3115:59jaretDatomic On Prem 0.9.5951 Now available.#2019-07-3116:37grzmI've noticed a dramatic increase in BeforeInstall times when deploying Datomic Cloud (from ~ 1 minute to over 2 minutes). Everything else is on the order of a second. Any thoughts on what might have caused that? The commit when it changed was only a change in deps.edn, where I updated deps.edn to reflect the conflicts reported when deploying.#2019-07-3117:01Joe Lane@jaret @marshall Doc suggestion related to the cloud tuples example.
The name of the ident is :reg/semester+course+student but the actual order of the tuple is different, it's [:reg/course :reg/semester :reg/student] and I found it difficult to keep the differing orders straight in my head when learning tuples.
Found at:
https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples
{:db/ident :reg/semester+course+student
:db/valueType :db.type/tuple
:db/tupleAttrs [:reg/course :reg/semester :reg/student]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
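Worth spelling out for other readers: the composite value, and any lookup ref built from it, follows :db/tupleAttrs order, not the word order in the ident. With the schema above (the entity ids below are hypothetical):

```clojure
;; Sketch: course comes first because :db/tupleAttrs says so, even
;; though the ident reads semester+course+student.
(def course-id 1001)    ; hypothetical entity ids
(def semester-id 1002)
(def student-id 1003)

(d/pull db '[*]
        [:reg/semester+course+student [course-id semester-id student-id]])
```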
#2019-07-3117:01jaretI’ll switch that around in the example#2019-07-3117:01Joe LaneThanks!#2019-08-0100:43QuestI'm just trying to get Datomic Console running against a local dev transactor -- but whenever I hit localhost:8080/browse, I get a Jetty 503.
I noticed this nasty exception in the Datomic logs. Does anyone recognize it?#2019-08-0100:45QuestRunning on OSX, datomic-pro-0.9.5930 with datomic-console-0.1.216 installed#2019-08-0103:08QuestFigured it out -- missed the meaning of this text at https://my.datomic.com/downloads/console
The Datomic Console is included in the Datomic Pro distribution. Datomic Free users can download it below.
= installing this old version on top of a Datomic Pro release will break the console.
Reinstalled Datomic to undo the damage, console is working fine now 👍#2019-08-0120:49Quest^ Scratch the report on datomic-pro 0.9.5951 failing to download -- couldn't repro after blowing away my .m2, so guessing it was something odd on local.#2019-08-0214:14matthavenerIs the “single parent” policy implied by isComponent true attributes enforced by datomic? It seems like I can add another parent to a child, and then the backref behavior is really strange#2019-08-0214:22matthavenerhere’s what i’m seeing (both asserts pass)#2019-08-0214:22matthavenerhttps://gist.github.com/matthavener/4e61cf3db97fde90cde56af0d556ba6b#2019-08-0214:22souenzzoYes, you can
If :foo/bar isComponent and you insert [2 :foo/bar 1] and [3 :foo/bar 1]
- if you retractEntity 3, 1 will be retracted
- if you retractEntity 2, 1 will be retracted
- in pull/entity API if you ask for :foo/_bar from 1, it will return just 2 or just 3 "randomly"
- in query, it should have no effect#2019-08-0214:23matthavenerthanks @souenzzo 🙂, that’s exactly what I’m seeing but the semantics were just confusing at first#2019-08-0214:25souenzzoI think that it is discouraged, but this behavior will not change AFAIK#2019-08-0214:27matthaveneryeah, having a “consistent view” of the db and a backref that is “random” doesn’t exactly jive#2019-08-0214:27matthavenerjust have to add more validation to my txns to ensure every child only has one parent#2019-08-0214:29souenzzonew ensure / spec features should help
important to say that it's a stable "random"#2019-08-0214:56matthavenerstable for a given value of db?#2019-08-0214:56matthaveneruntil a reindex or something?#2019-08-0215:12souenzzoI'd rather leave it to someone on the datomic team to answer that.#2019-08-0216:03nilpunningDoes anyone know if datomic.api/gc-storage collects just on the database specified in the connection or across all databases in the Datomic deployment?
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/gc-storage#2019-08-0222:41rgorrepati#2019-08-0222:41rgorrepatiCan someone tell me why the above wouldn’t work?#2019-08-0300:46souenzzo@rgorrepati this exception is because this value isn't printable
if you try to print/str x it should throw too#2019-08-0300:46rgorrepati@souenzzo I can print x though#2019-08-0300:47rgorrepati#2019-08-0302:22souenzzoreal weird. no idea.#2019-08-0302:22souenzzoIt's a "raw" REPL or Intellij/nREPL/CIDER ?
(def test-var (first (ffirst (d/q '[:find (take 10 ?nga)
:where [_ :nga-can/nga-id ?nga]]
db))))
(d/q '[:find ?e
:where [?e :nga-can/nga-id test-var]]
db)
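For reference, here is a version of the second query that binds the value through :in instead (a sketch: inside the quoted query vector, test-var reads as a literal symbol, so it never matches a stored value):

```clojure
;; Sketch: pass the value as a query input so it is actually compared.
(d/q '[:find ?e
       :in $ ?nga
       :where [?e :nga-can/nga-id ?nga]]
     db test-var)
```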
#2019-08-0613:14souenzzotest-var is quoted in this code.#2019-08-0613:17eoliphantah hell copied and pasted that from him, i;m on my phone lol. missed that thanks will dbl check#2019-08-0613:13Alex Miller (Clojure team)comparisons on floats are generally problematic due to imprecision (this is a generic issue in any language using IEEE 754)#2019-08-0613:13Alex Miller (Clojure team)that may not be your problem, but just something to be aware of#2019-08-0613:16eoliphantyeah I thought that that might be the case. But, even using the ‘same’ value seems to be problematic.#2019-08-0613:27matthavener@eoliphant could it be NaN? that will fail even self equality tests#2019-08-0613:43eoliphantlooks like it’s good ol IEEE floats at the end of the day#2019-08-0614:11jarethttps://twitter.com/datomic_team/status/1158742202270572546#2019-08-0617:17m0smithHi all, how do I retract all entities match some filter. For example, the entitiy has a :date attr and I want to retract all entities where :date is in a given date range#2019-08-0617:36matthavener(d/transact conn (map (fn [e] [:db.fn/retractEntity (:db/id e)]) (filter #(in-range? (:date %)) entities))) ?#2019-08-0617:37matthavenerthere’s not really any notion of DELETE FROM table WHERE date >= ... if that’s what you’re looking for#2019-08-0618:05John MillerCould somebody sanity check a model I came up with?
We need to use a full-text search to find entities, and then return a bunch of attributes that are stored in Datomic. So I have created a query function as an ion that calls out to cloudsearch and returns a set of ids and scores. I then look up the id normally. The query looks something like this:
(d/q '[:find (pull ?e [*]) ?score
:in $ ?query
:where [(ions.cloudsearch/find-by-query ?query) [[?id ?score] ...]]
[?e :generic/id ?id]]
db
query)
So two questions - Is this a reasonable model? If so, are there any resources we might need to worry about if Cloudsearch takes a long time to respond (e.g. JVM threads? Memory? Certainly not CPU since it would be io bound). We’re expecting potentially 10's of requests per second, and CS typically responds in 10's of ms but sometimes stalls and takes multiple seconds. When that happens, 40-50 requests might build up before CS responds.
We saw a rash of :busy anomalies once since we started trying to make this work. It may be unrelated, but I wanted to see if anybody knew of any potential pitfalls before we go too far down this path.#2019-08-0618:28m0smith@matthavener Thanks. I wish google were better at finding this information#2019-08-0618:31m0smithI would do a separate query to get the "entities"?#2019-08-0618:37csmyes, you would use a separate query in that case. Though remember that those entities aren’t removed from the database, the history is intact; what you’re trying might be better done in your queries with a rule that filters :date after the date you’re interested in#2019-08-0618:37csmif you’re attempting to get rid of old data to save space, datomic might not be the right fit for your problem#2019-08-0619:00m0smithFor now I think I do want to retract them. we want the history so we can always trace the data. thanks again#2019-08-0619:05Joe Lane@jmiller If your cloudsearch request blocks I would say its a bad idea to do that in the query. Instead, why not issue the cloudsearch query, then take the results and pass them into the datomic query? We have a homegrown datomic cloud lucene integration that does something similar to this. Granted, the lucene indexes are relatively small (3gb) so its low overhead.#2019-08-0620:31John MillerThanks for the response, Joe. It generally only stalls for a couple seconds so my hope is that Datomic handles it fine. I’ve successfully done similar things in UDFs in Mysql, but that isn’t built on Java so I am concerned that the gotchas are different.#2019-08-0701:04puzzlerThe datomic website says datomic requires jdk 7 or 8. Still true, or is website out of date? 
https://docs.datomic.com/on-prem/get-datomic.html#2019-08-0712:06marshall@puzzler On-Prem ?#2019-08-0719:27jarethttps://forum.datomic.com/t/datomic-cloud-482-8794/1117#2019-08-0719:39jaretAlso an important notice on the latest release:#2019-08-0719:39jarethttps://forum.datomic.com/t/issue-with-t2-instances-important/1118#2019-08-0720:15joshkhIs there a performance / speed benefit when querying on attributes that reference a :db/ident rather than a keyword attribute value? For example:
; schema
[
; installed :db/ident
{:db/ident :season/winter}
; a reference attribute to point to the :season/winter :db/ident
{:db/ident :year/season-ref :db/cardinality :db.cardinality/one :db/valueType :db.type/ref}
; a generic keyword attribute
{:db/ident :year/season-kw :db/cardinality :db.cardinality/one :db/valueType :db.type/keyword}
]
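Not an official answer on performance, but the two representations do differ in what's stored: the ref attribute stores the entity id of the :season/winter ident (a long), while the keyword attribute stores the keyword value itself. The query shapes under the schema above look like this (a sketch):

```clojure
;; Sketch: the ref version matches the entity whose ident is
;; :season/winter (Datomic resolves the keyword to its entity id);
;; the keyword version matches the keyword value directly.
(d/q '[:find ?e :where [?e :year/season-ref :season/winter]] db)
(d/q '[:find ?e :where [?e :year/season-kw  :season/winter]] db)
```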
#2019-08-0722:09patChanges link for latest free release is broken#2019-08-0806:50karlmikkoWe just had an alarm from our datomic transactor AlarmLogWriteTimedOut, however I can't seem to find from google searches what causes this alarm. All I have been able to find is https://docs.datomic.com/on-prem/monitoring.html#alarms which says to contact datomic support. I thought I would ask here in case others had seen this and to potentially share the knowledge once we find out what causes this.#2019-08-0813:27jaret@karlmikko I’d definitely log a case to support, especially if you’re still seeing errors. But generally that alarm indicates the transactor timed out waiting for storage to acknowledge a write, specifically to the transaction log.#2019-08-0822:49karlmikkothanks @U1QJACBUM - I will lodge a case today - the thing i was a bit confused about was the term log as it could be the log file on disc or the transaction log.#2019-08-0900:46karlmikkoI managed to find the exception thrown in the logs and it was a timeout talking to dynamodb, and looking at dynamodb metrics at the time there were no failures and plenty of read/write capacity.#2019-08-0813:28jaretAs an update to the issue reported yesterday with t2 instances. We resolved the issue by working with AWS last night.#2019-08-0813:28jarethttps://forum.datomic.com/t/resolved-issue-with-t2-instances/1118#2019-08-0816:44m0smithI keep running into problems where library code trying to determine whether to use the
Peer or Client API gets it wrong. See https://software-ninja-ninja.blogspot.com/2019/08/datomic-ions-lget-does-not-exist.html My question here is, is there a well defined way to determine which API the code is using? Followup question: Are more people using the Peer or the Client? The Datomic Cloud seems to require the Client but are there a lot of people also using the Peer?#2019-08-0904:33QuestHi @U050VTWMB,
I found an issue that may be related in the onyx-kafka plugin. The modern Datomic Pro (peer lib) includes namespace datomic.client where I don't believe it used to. see https://clojurians.slack.com/archives/C051WKSP3/p1565070325058400 -- you may be able to make use of the :exclusions workaround#2019-08-0904:34QuestI fixed the auto-detection mechanism for onyx-datomic -- perhaps a similar fix is needed in one of the libraries you're consuming. https://clojurians.slack.com/archives/C051WKSP3/p1565111849063700#2019-08-0904:36Questcould be useful to run lein pom && mvn dependency:tree -Dverbose=true, should show you if anything besides datomic-pro is pulling in the client lib dependency.#2019-08-0913:12m0smithmany thanks#2019-08-0822:36m0smithAnother question: datomic.client.api/index-range seems to support :limit and :offset arguments but it looks like they are ignored. Is that the case or is there a good example of them being used? See https://docs.datomic.com/client-api/datomic.client.api.html#var-index-range#2019-08-0912:54Mark AddlemanTrying to use ions cast/event on a local desktop environment and am getting
Syntax error (IllegalArgumentException) compiling at (src/mtz/server/email/core.clj:46:3).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil
What am I doing wrong?#2019-08-0913:07Mark AddlemanOh, the calling code is (cast/event {:msg (str "Starting " daemon-name)})#2019-08-0913:36Joe Lane@UAMEU7QV7 All examples of my cast/event calls are using a string value as my :msg value.
(cast/event {:msg "VerifyAuthChallengeTrigger"
::correct correct?})
#2019-08-0913:37Joe LaneI've never seen one with (str "Starting " daemon-name). Maybe the right thing is to attach the daemon-name as a namespaced keyword and pick a static string as the :msg text?#2019-08-0916:01grzmWhat happens during the BeforeInstall phase of a Datomic Ion deploy?#2019-08-1000:15jplazaWe are testing datomic-cloud and are planning to start using it soon. We have a business SaaS product and currently use a single multi-tenant database. The question is, is there any recommended architecture for multi-tenant apps? Single database, multiple databases?#2019-08-1015:47Mark AddlemanI had success with the following architecture: design a schema for multi-tenancy but place each tenant in a separate database. We also had an admin or account database that served as a catalog of accounts.#2019-08-1015:49Mark AddlemanBy designing the schema for multi-tenants, we were able to more directly handle new business requirements around free-tier accounts (all of those went into a single db) and we anticipated that would be easier to handle requirements around subaccounts#2019-08-1110:30jplazaLet me see if I understand what you are saying. You kept an :account/id attribute for every record that needed that, but instead of using one db you used multiple dbs?#2019-08-1110:36jplazaI was considering using different db for each tenant (account) to be able to get rid of the :account/id in every single record. So I wanted to know if there was some hard limit on the number of dbs you can create in datomic or if it’s not a best practice, etc#2019-08-1114:48Mark AddlemanYes, I kept :account/id and it (almost always) had the same value within the db.#2019-08-1114:50Mark AddlemanI don't believe there is a hard limit on number of dbs. However, Stu once said that an implementation detail kept multiple dbs from being as performant at current transact operations as it theoretically could be. 
I suggest you contact Datomic support if you are concerned about high throughput concurrent transactions across dbs.#2019-08-1123:00jplazaThanks a lot @UAMEU7QV7 for sharing your thoughts#2019-08-1002:29Sam FerrellBeginner question... using a datalog query, how would I assert the value is non-nil? [?e :my/attr ???]#2019-08-1012:09benoitYou can't have nil values in Datomic so you want to assert that an attribute exists for the entity, which you can do with [?e :my/attr].#2019-08-1213:42marshallyou can also use the missing? predicate: https://docs.datomic.com/on-prem/query.html#missing and https://docs.datomic.com/cloud/query/query-data-reference.html#missing#2019-08-1215:05Sam Ferrellthank you both!#2019-08-1220:26mafcocincobeginner question: My company is considering moving to Datomic in the next year or so. Can anyone point me to hard performance numbers? Specifically I'm interested in the volume of transactions the transactor can support. I know this will be dependent on the hardware that it is running on, but I'm trying to get some "back of the napkin" estimates on what kind of volume we typically could push through the transactor.#2019-08-1221:29shaun-mahood@mafcocinco https://www.datomic.com/room-keys-story.html is the main example I used for scale when I did the same thing at my company - but we're doing small enough data that it really doesn't matter outside of having a number to point to as far as what is possible.#2019-08-1222:14mafcocincoThanks!#2019-08-1303:52xiongtxI'm not sure why the use of return keys in this example Datomic query isn't working. It's straight out of the return maps example: https://docs.datomic.com/on-prem/query.html#return-maps
(d/q '[:find ?artist-name ?release-name
:keys artist release
:where [?release :release/name ?release-name]
[?release :release/artists ?artist]
[?artist :artist/name ?artist-name]]
db)
I get a
2. Unhandled com.google.common.util.concurrent.UncheckedExecutionException
java.lang.IllegalArgumentException: Argument :keys in :find is not a
variable
1. Caused by java.lang.IllegalArgumentException
Argument :keys in :find is not a variable
This is datomic-pro-0.9.5930#2019-08-1413:47marshallAre you using peer or client? What version of client (if that’s what you’re using)
I just tested this exactly as pasted with 0.9.5930 and it works fine for me.#2019-08-1417:54xiongtx[com.datomic/datomic-pro "0.9.5561.50"], which I believe is the peer#2019-08-1417:56xiongtxSeems to work w/ 0.9.5930. Maybe this feature was introduced very recently?#2019-08-1417:58marshallthat’s correct#2019-08-1417:58xiongtx👌#2019-08-1417:58marshallit was added in 0.9.5930 i believe#2019-08-1314:27eoliphanthey @jaret quick note, in the latest cloud rev, you guys fixed tx-range’s result to return a :data key per the api doc, but the Log API discussion in the docs still refers to :tx-data https://docs.datomic.com/cloud/time/log.html#2019-08-1317:55jaretgood catch. I’ve fixed the table. should be visible on refresh.#2019-08-1314:33eoliphanthey @mafcocinco it’s probably nearly impossible to get something that would be meaningful for your use case without mocking a bit of it up. Everything from tx size to your use of tx functions, etc, is going to affect any number. There are the more general guidelines, like it’s definitely not for ‘write scale’ apps, raw ingest of clickstream, IoT, etc data. But at least for us, it’s more than adequate for our typical OLTP scenarios. Nice thing though is, given the ease of modeling, even if you only have a rough idea of your use case, it’s gonna be pretty easy to create a benchmark that will give you some of what you need
You can define the attributes you think you need, then you have a pretty large degree of freedom to mess around with creating entities that you think are representative for your testing. also, make sure you at least skim through the best practices section of the docs, as there are a few things in there that could affect your assessment if you’re not aware of them#2019-08-1322:07tylerIs there any way to tap into aws codedeploy hooks for ions deployments? Would like to run our own checks to rollback on failure.#2019-08-1322:33Joe Lane@tyler We made a codebuild script with different phases, one of which deploys ions, as well as other stuff.#2019-08-1322:34Joe LaneWe did that because we couldn't find a nice codedeploy hook for what you're describing.#2019-08-1322:35tylerInteresting. Will look into that approach, thanks.#2019-08-1404:15johnjelinektryna set up ions#2019-08-1404:15johnjelinektryna set up ions#2019-08-1404:15johnjelineklooks like something error'd:
clojure -A:dev -m datomic.ion.dev '{:op :deploy-status :execution-arn arn:aws:states:us-east-2:101416954809:execution:datomic-dev-Compute-784GREJAJTLX:dev-Compute-784GREJAJTLX-bd6deb15afeee59dd2dd16943cf3c0313f534c34-1565755664290}'
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
#2019-08-1404:16johnjelinek{...
"status": "Failed",
"errorInformation": {
"code": "HEALTH_CONSTRAINTS",
"message": "The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems."
}
}
#2019-08-1404:16johnjelinekany idea how to troubleshoot?#2019-08-1404:33johnjelinekall my CloudWatch Logs look like:
START RequestId: a012efad-3c01-4a75-8b08-615978c5f177 Version: $LATEST
2019-08-14T04:11:53.124Z a012efad-3c01-4a75-8b08-615978c5f177 { event:
{ codeDeploy: { deployment: [Object] },
lambda: { cI: 4, c: [Array], uI: -1, u: [], dI: -1, d: [], common: [Object] } } }
END RequestId: a012efad-3c01-4a75-8b08-615978c5f177
#2019-08-1404:38johnjelinekmade an issue for this: https://github.com/Datomic/ion-starter/issues/5#2019-08-1413:42marshallDid you examine your Datomic system logs? https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs#2019-08-1413:42marshallyou need to determine why the instances are not starting up - usually caused by an error in the ion code that is preventing it from loading#2019-08-1423:21johnjelinekI was using the starter ion code#2019-08-1423:31marshallSearch your cloudwatch logs for the system you deployed to#2019-08-1423:31marshallSee if any errors or exceptions show up in there#2019-08-1501:28johnjelinekI posted the messages from cloudwatch logs above#2019-08-1501:35marshallThe logs from your datomic stack. Not codedeploy#2019-08-1501:36marshallTake a look at the link to the docs i posted. It includes details about finding the datomic stack logs#2019-08-1408:48mkvlrwe’ve been running with a shared valcache for about a week in production now. When deploying, valcache is briefly accessed by two instances, which we heard from @jaret is not officially supported but should work. We’re now seeing EOFException pop up originating in datomic.index/reify every now and then. Is this a setup you plan to support or should we stop running like this?#2019-08-1408:53mkvlrthis is the stacktrace if it helps. Also happy to report this elsewhere if that’s better.#2019-08-1408:55mkvlrAnother issue we’ve seen is this stackoverflow error. This occurred only once and due to recursion we don’t have the full stacktrace, but there’s datomic.query/fn in the stacktrace. We’re thinking of increasing the number of frames printed in our bug tracker. Any other advice on how to track this down? Thanks!#2019-08-1413:18jaret@mkvlr we do not currently have plans to specifically support valcache being accessed by two instances.
I theorized that it should work based on seeing multiple separate services share valcache, but it appears to affect indexing with that EOFException.
Re: your other error. I’d be happy to look at your Datomic logs to see the error in query. If you’d like to open a case with support (email address redacted) we can use that to share files and we won’t lose our communication to slack archiving. In general, I think it would be useful to look at the entire datomic log for both errors.#2019-08-1414:52mkvlr@jaret thanks. Talking to my colleagues we believe the EOFException did occur before we were running with two nodes. I guess we’ll reconfigure our nodes to use different valcaches and let you know if it does happen again. And will get in touch with support for the query error, thanks again!#2019-08-1414:53jaretOh interesting. It might be worth it to have us investigate the EOFException as well via the support portal.#2019-08-1414:53jaretEspecially if you’ve kept logs from before and after the switch to sharing valcache.#2019-08-1500:27tylerShould a fresh connection be retrieved for every request with datomic cloud or should you cache the connection?#2019-08-1500:28marshallhttps://docs.datomic.com/cloud/client/client-api.html#connection#2019-08-1500:31tylerHm that’s what I thought. Seeing something that looks like a memory leak though when retrieving a db connection and db value every request. Will dig into it more.#2019-08-1501:10kennyThe docs for as-of (https://docs.datomic.com/client-api/datomic.client.api.html#var-as-of) say:
> Returns the value of the database as of some time-point.
What is "time-point"? A Date? Epoch millis? Datomic's t?#2019-08-1501:22kennyI'm assuming it's similar to Datomic peer: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/as-of
> t can be a transaction number, transaction ID, or Date.#2019-08-1506:40Brian AbbottIs anyone from Datomic Support currently available? We are having a critical outage at the moment.#2019-08-1507:12Brian AbbottPlease see Cognitect Support Ticket 2327 🙂 Thank you!#2019-08-1514:29jaretHi Brian, I responded on the ticket a few hours ago.#2019-08-1523:07Brian AbbottThank you again so much Jaret!#2019-08-1508:03mkvlr@jaret my colleague @kommen submitted the issue to (email address redacted). Got a request to create our zendesk accounts but ran into an XSS error when trying to set my password on …#2019-08-1514:30jaretIt looks like they were eventually able to create the ticket. Did they resolve the error? Was it transient or are they still having issues registering?#2019-08-1514:32mkvlrjust tried again, still doesn’t let me assign a password#2019-08-1514:32mkvlr
#2019-08-1514:34jaretWhat browser are you using and do you use adblock (can you try without if so to confirm)?#2019-08-1514:42grzmI have a transaction function that may return empty tx-data if certain criteria aren't met. I'd like to take a particular action only if there were datoms transacted. My understanding is that every transaction that does not fail will have at least a single datom in the tx-data of the result for the txInstant of the transaction. From what I've observed, I believe I can test whether the count of the datoms in the tx-data of the result is greater than one to determine if any additional datoms were asserted or retracted. Is this something I can rely on? Anyone have a better approach?#2019-08-1515:01mgrbyteAssuming 1 datom transacted, you could also check if :db/txInstant is the only attr asserted in the "transaction entity". If you're not using reified transactions then this additional check is probably moot.#2019-08-1516:02grzmYeah, I'd like to save doing an additional database lookup: I guess I could assume that the :db/id of the :db/txInstant attribute is unchanging and not do a lookup, but inspecting the datom attribute db/id in a tx-data that includes only a single datom seems redundant if I know that each non-anomalous transaction result will have at minimum a single :db/txInstant attribute. Thanks for thinking through this with me.#2019-08-1517:41tylerIs there a recommended way to hook up a java agent to a query group? Running into a strange memory issue with ions and we are having a hard time debugging with the datomic memory monitor on the provided dashboard.#2019-08-1520:08Laverne SchrockWe have one Datomic Cloud deployment running version 477-8741, and another deployment running 480-8772. On the older version we are able to make assertions about the entity with id 0, but in the newer version we cannot, due to the following error :
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Boot datoms cannot be altered: <actual datom omitted, let me know if you need it>",
:db/error :db.error/datom-cannot-be-altered}
This seems like reasonable behavior, but it doesn't seem to be documented in the changelog.
Does anyone know if this is an expected change?#2019-08-1607:46danierouxOn https://docs.datomic.com/cloud/ions/ions-monitoring.html I see that “You can send all alert, event, and dev output to registered taps by calling initialize-redirect with a target of :tap”
With com.datomic/ion {:mvn/version "0.9.34"} and (datomic.ion.cast/initialize-redirect :tap) I get:
initialize-redirect expects :stdout, :stderr, or a filename
What am I missing?#2019-08-1710:46danierouxNow works with com.datomic/ion {:mvn/version "0.9.35"}, thanks @U1QJACBUM#2019-08-1611:18dmarjenburghA question regarding the EULA. The passage that reads as follows:
> Upon termination of this EULA for any reason, you will erase or destroy all copies of the Datomic Cloud Software, or part thereof, in your possession, if any. Any use of the Datomic Cloud Software after termination is unlawful.
What exactly is meant by the ‘Datomic Cloud Software’? The publicly available `com.datomic/client-cloud` library? The AWS resources running in your account? Or the code running on the marketplace image?
I need to answer these questions for the ‘vendor management’ department. We really want to employ Datomic Cloud in the organization.#2019-08-1622:00xiongtxFrom the datomic documentation: https://docs.datomic.com/on-prem/storage.html#connecting-to-transactor
> If the transactor cannot bind to its publicly reachable IP address (e.g. the transactor is on a VM that doesn’t own or can’t see its external address), you will need to provide a value for alt-host on the transactor with the publicly reachable IP address in addition to the host property.
We’ve got the transactor deployed in a container behind a load balancer. Should the alt-host be the load balancer’s URL? Where do we provide the LB’s port?#2019-08-1700:15favilaAlt-host should be the balancer’s external hostname#2019-08-1700:15favilaPorts cannot differ #2019-08-1700:16favilaI hope you are not actually splitting traffic? Datomic wants to be able to address transactor individually #2019-08-1700:48johnnyillinoisThe traffic is not split #2019-08-1700:48johnnyillinoisAny reason the ports are not allowed to differ#2019-08-1700:49johnnyillinoisOr do you know how big of a change it would be to allow the ports to differ?#2019-08-1700:49johnnyillinoisThe LB works just as a proxy #2019-08-1702:10favilaThe ports can’t differ simply because there’s no option to do so#2019-08-1821:31Chris SwansonHey does anyone use vase with datomic cloud? It wasn't obvious to me how to connect them; the datomic uri key vase uses is different than the datomic cloud client library which needs extra AWS info.#2019-08-1914:35Joe Lane@chrisjswanson I recently succeeded at this but ran into several small issues with it. Granted, i'm trying to deploy via ions and that was where the issues were.#2019-08-1915:06Chris Swanson@lanejo01 if you'd care to share details, I'm quite curious. Did you end up having to write a custom intercepter to add the datomic connection to the chain? Or modify the vase code to let it handle datomic cloud connection Uris?#2019-08-1915:08Joe LaneBoth#2019-08-1915:08Joe LaneI can share more later#2019-08-1915:09Chris SwansonThanks man, good to have that insight, I'd appreciate it#2019-08-1915:09Joe LaneUsing Ions or not?#2019-08-1915:11Chris SwansonYes but probably not to deploy vase, just custom query functions. Vase would likely go on k8s or lambda directly. 
But I'm still exploring, so I'm also really curious how it ended up working for you on ions#2019-08-1915:11Chris SwansonIf I could just deploy vase straight as an ion that would be pretty nice#2019-08-2007:29Ivar RefsdalWhen I connect to a database with an url like "datomic:", Datomic will by default log this password in plaintext. Is it possible to avoid this? Would it help using a map syntax for the connect call?#2019-08-2009:49Lone RangerIs this for corporate security policy? Are you running on a linux server? Does the message appear at initialization?
Not a perfect answer, but if the answers to the above questions are "yes" you could always use awk to filter out the password logging line ...#2019-08-2012:05Ivar RefsdalThe message appears when doing (d/connect) (default log-level is info), so yes that is during initialization.
I've "solved" it using a timbre log middleware. Nevertheless I think it's bad practice by Datomic to log passwords in plaintext by default.#2019-08-2013:40marshallthe printConnectionInfo configuration https://docs.datomic.com/on-prem/system-properties.html#transactor-properties
will prevent Datomic from logging storage password on startup#2019-08-2108:12Ivar RefsdalThanks @U05120CBV
My problem was that the peer logged the password, not the transactor.
Or will putting this property also affect the peer?#2019-08-2113:25marshallAh, i misunderstood. I don’t believe that will affect the peer#2019-08-2113:25marshallThat seems like something that should be registered as an improvement request
Can you access the feature request portal (link in top menu bar of http://my.datomic.com dashboard)? If so, that would be a great one to add#2019-08-2015:31grzmI have a Datomic Ion which is called by a scheduled Cloudwatch event every 10 minutes. 8-9 times out of 10 I get the following error in the logs for the lambda:
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection reset by peer", :clojio/throwable :java.io.IOException, :clojio/socket-error :receive-header, :clojio/at 1566310809474, :clojio/remote "[ip redacted]", :clojio/queue :queued-handler, :datomic.ion.lambda.handler/retries 0}
The function called by the lambda seems to be executing fine: I cast an ion event during execution and can see that in the corresponding Datomic compute logs. Viewing the CloudWatch metrics for the lambda via the console also doesn't show these errors, so I think the ion is actually working fine. What do these anomalies indicate?#2019-08-2015:39m0smithIs there a Datomic Console deployed with Datomic Ions/Cloud?#2019-08-2015:46grzmNope. REBL can be very useful, particularly its ability to leverage the nav metadata that decorate Datomic results.#2019-08-2118:02m0smithHow do I get REBL running with ions?#2019-08-2118:04grzmHave you used REBL before? If not, take a look here: http://rebl.cognitect.com#2019-08-2118:06grzmIf you've got REBL up and running and your datomic-socks-proxy running, Datomic query results are navigable using REBL. It's been years since I used the Datomic console, so I can't provide a great comparison, but I've found REBL to be really useful, in general, and when coupled with Datomic.#2019-08-2118:06grzmThere's really nothing special about ions in particular wrt REBL.#2019-08-2015:40Joe Lane@grzm I think the aws lambda may have timed out (jvm startup) but the execution was still performed. What happens if you decrease the CW event to every 2 minutes?#2019-08-2016:07grzmCheers! That seems to be it. Would be helpful if the error were more explicit. The total time reported in the lambda logs is well under the lambda timeout, so the issue wasn't immediately apparent to me.#2019-08-2016:08grzmGiven that I don't want to actually run the code at such a low frequency, I'm thinking of setting up a second CloudWatch rule to call the same ion, but switch on the input, no-oping the rule that's keeping it warm. Does this seem sane?#2019-08-2016:19Joe LaneCan you expose it through http direct?
CW events can issue http calls#2019-08-2016:19grzmOh, that's an idea.#2019-08-2112:50Lone RangerI thought I remembered somewhere in the documentation that you can add a docstring to a transaction. Did I hallucinate that?#2019-08-2113:56marshallSure. You can put any attr (including :db/doc ) on a transaction entity#2019-08-2113:58marshall{:db/id "datomic.tx" :db/doc "my transaction doc""}#2019-08-2116:04Lone Rangercan you do that in list form too or just map form?#2019-08-2116:25marshallsure#2019-08-2116:25marshall[:db/add "datomic.tx" :db/doc "my transaction data"]
#2019-08-2200:22Lone Rangeraha, ty sir!!!#2019-08-2113:13ivanaHello! get-else works with cardinality-one attributes only? how can I chek if my cardinality-many attribute has at least one value or not, without disappearing rows with this attr is not setted?#2019-08-2113:31ivanatrick with (or [?o :order/my-cardinality-many-attribute ?x] [?o :db/id ?x])works, but it made multiple lines on many values in attribute#2019-08-2114:13marshall@ivana you could use the missing? predicate#2019-08-2114:14marshallAmd a similar or trick#2019-08-2114:15ivanayes, but how can I get all missing and non-missing attr rows just with a chek of missing and without a multiplication lines?#2019-08-2114:21benoit@ivana It's not clear what query you're trying to write. Your or clause above will return all the entities in your database because of [?o :db/id ?x], is it really what you want?#2019-08-2114:23ivanaI want simply to check (!) if this entity have at least one value in its card-many attr or not - with the same entity lines as they are.#2019-08-2114:24ivanaF.e. I have 2 entities, one with 10 values in many attr, and one with 0. I want 2 rows with bollean flag#2019-08-2114:27ivanaNot 11 lines, not 1 line. Just 2 - as a real amount of my entity#2019-08-2114:30benoitThere might be a simpler approach but something like this could work:
[:find ?o ?has-many-attr
:where
[?o :other/attr]
(or (and [?o :order/my-cardinality-many-attribute]
[(ground true) ?has-many-attr])
(and [(missing? $ ?o :order/my-cardinality-many-attribute)]
[(ground false) ?has-many-attr]))]
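An alternative sketch (untested against a live database; attribute names copied from the thread): pull the cardinality-many attribute along with each row, then derive the boolean flag in ordinary Clojure rather than inside the query.

```clojure
;; Pure helper: true iff the pulled entity map has at least one value
;; under the cardinality-many attribute.
(defn has-many-attr? [pulled]
  (boolean (seq (:order/my-cardinality-many-attribute pulled))))

;; Query side (assumes datomic.api is required as d and a db value is in scope):
(comment
  (->> (d/q '[:find (pull ?o [:db/id :order/my-cardinality-many-attribute])
              :where [?o :other/attr]]
            db)
       (map (fn [[m]] [(:db/id m) (has-many-attr? m)]))))
```

This yields exactly one [entity-id boolean] pair per entity, avoiding the row multiplication ivana mentioned.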
#2019-08-2114:31ivanathanks, I'll try it#2019-08-2114:33ivanafor the first impression it is exactly what I need, thanks a lot! I'll play with it in real queries#2019-08-2115:20deadghostDatomic does not accept nil attribute values. If I have an entity
{:foo/id 101
:foo/code :FOO
:foo/type "bar"}
and I want to update it like so:
(d/transact conn [{:foo/id 101
:foo/code :LOL
:foo/type nil}])
it will throw an Exception.
If I exclude the nil attribute:
(d/transact conn [{:foo/id 101
:foo/code :LOL}])
:foo/type will remain "bar".
:foo/type seems like it needs to be explicitly retracted. I'm currently using the method detailed in https://matthewboston.com/blog/setting-null-values-in-datomic/ to do nil updates. It's a red flag that I need to handroll something to do this type of update and it suggests I am not doing things in the correct way. Another approach would be a full retract and insert, but I get the feeling there are unexpected behaviors I have not thought about with that approach. How are you all approaching this?#2019-08-2115:30ghadiYou do not need to reassert the whole entity - just retract the part that is no longer true#2019-08-2115:30ghadiWithout even reading that article it is probably misconceived#2019-08-2117:00eoliphantyeah, I think it misses the point. entities are sets of arbitrarily related attributes, not tables; this takes some getting used to but it’s far more powerful once you get the hang of it. And the example of “So what if we need to set a value to null?” isn’t really one.
This
(datomic/q '[:find ?id ?firstname ?lastname
:in $
:where
[?id :user/firstname ?firstname]
[?id :user/lastname ?lastname]]
(datomic/db conn))
should be more like this:
(datomic/q '[:find ?id (pull ?e [:user/firstname :user/lastname])
:in $
:where
[?id :user/firstname]
; - or perhaps -
[?id :user/id]
]
(datomic/db conn))
No need to ‘simulate null’, and you keep the clean semantics of simply retracting :user/lastname. Also ‘matching for values’ is less efficient (though probably not a big deal in this trivial case), as the engine has to do work to match each ‘clause’. To the extent possible, let your where do the selecting, then just pull what you need in terms of values#2019-08-2117:16eoliphantalso, the edit-or-create-user-txn example seems to conflate empty strings with null/nil. Now that may be desired behavior in certain circumstances, but “” is not nil/null/“not present”, even in a traditional say relational db. Also, figuring out retracts is pretty trivial. A set/difference on the keys of the incoming update and an existing entity will give you that directly. “” as nil, if necessary, can be tacked on with filter prior to the diff#2019-08-2122:54m0smithCalling cast/event from within CIDER results in a StackOverflowError#2019-08-2122:55m0smithExecution error (StackOverflowError) at cider.nrepl.middleware.out/print-stream$fn (out.clj:93). Has anyone else seen this?#2019-08-2123:02andrew.sinclairIs there a function in the Peer api that allows a user to programmatically determine the transactor’s port?#2019-08-2123:02andrew.sinclairWe are using the map uri, with a cassandra callback, so port is not present in the uri.#2019-08-2123:08m0smith(ns bug-demo
(:require [datomic.ion.cast :as cast]))
(cast/initialize-redirect "/tmp/hamster")
(cast/event {:msg "ShouldNotCauseAStackOverflowErrorInCider"})#2019-08-2222:37telekidAre homogeneous tuples limited to 8 values? https://docs.datomic.com/cloud/schema/schema-reference.html#tuples#2019-08-2222:37telekidseems so, but just wanted to confirm#2019-08-2222:38kennyCan you rename a Datomic Cloud db?#2019-08-2306:25jaihindhreddyI want to extend codeq to analyze Python (2) code. What would that entail?#2019-08-2310:45jaihindhreddyWhere can I get the datomic client library?#2019-08-2312:46kirill.salykinThere seems to be no recent datomic-free release?
https://clojars.org/com.datomic/datomic-free
0.9.5697 vs 0.9.5951 pro#2019-08-2312:51akiel@kirill.salykin Yes this is a big problem and I really don’t understand why. I already wrote directly to Cognitect but got no answer. Maybe we can write an email together?#2019-08-2314:40kirill.salykinhttps://my.datomic.com/downloads/free#2019-08-2314:40kirill.salykinyou can download latest here#2019-08-2314:40kirill.salykindatomic-free-0.9.5703.21.jar
seems like a peer library#2019-08-2314:47kirill.salykinbut still, it is very outdated#2019-08-2314:47akielI'm currently trying to run it.#2019-08-2315:17akielIt somehow runs, but then I don’t understand that version number and it’s not available on maven central.#2019-08-2315:17akielI wrote a new mail to Datomic support.#2019-08-2315:23kirill.salykinI think they won’t respond :(#2019-08-2315:24kirill.salykinYou can use starter edition btw#2019-08-2315:24kirill.salykinIt is free and supports updates for 1 year#2019-08-2315:25kirill.salykinPro starter I think it is called#2019-08-2315:25akielI’m a paying customer of Datomic. I’ll find a way that they respond.#2019-08-2315:25kirill.salykinI see )#2019-08-2616:34akielJust for this thread. @U1QJACBUM answered this question in another thread with:
> We are considering different options for Datomic Free, and would love to hear more about your use cases. You can share your use cases with me via (email address redacted).#2019-08-2618:44kirill.salykinthanks for keeping me posted!#2019-08-2312:52kirill.salykinlast release from 2018…
> Maybe we can write an email together?
let's try, but I doubt it will help a lot#2019-08-2312:59akielMaybe someone else also likes to participate?#2019-08-2313:37joshkhis there a way to retrieve from a datomic cloud client the configuration that was used to create it?#2019-08-2315:44Lone RangerAs much as I would love it, I don't really see a whole lot of incentive for them to update the free version 😕#2019-08-2315:44Lone RangerThey need to make money somehow#2019-08-2323:05George CiobanuHi! I am trying to model a tree of components (Page, button group, tab, buttons etc) as a hierarchy of maps (each component has a bunch of attributes). It's very similar to the DOM in that if the user deleted a tab group that has buttons as children, the buttons need to be deleted as well. Of course, the user can also move a subtree of components to different nodes etc. It's a standard GUI editor.
I think the best way to model this is to make each node a component so that if any node is deleted its children are deleted as well. Does that make sense? Are there any subtleties I'm missing, and should I manage deletion and the whole hierarchy by hand using plain ref types?
Any thoughts much appreciated. A link to an article is also fine (I tried to RTFM but I never saw anyone use components for hierarchies and am wondering why).#2019-08-2401:01Lone RangerIs this secretly a datascript question? Happy to help either way but it would be good to know which direction you're going with it#2019-08-2401:02Lone Rangerspecifically, whether you're doing in clojurescript or clojure is kind of important#2019-08-2401:08Lone Rangermy thoughts are effectively you're going to need a DSL layer to interpret the meaning of the maps as they relate to the components, but you probably already knew that. If you're working in Clojure, you can't have dynamic (runtime) components since you probably don't want to ruin your DB by creating schema on the fly.
I would strongly consider checking the https://en.wikipedia.org/wiki/Entity_component_system for some inspiration on how you can create "dynamic" behavior from predefined schema using the entity component system.#2019-08-2401:15Lone RangerRegarding the "deletion", the good news is that you don't really have to "delete" anything, you simply assert what the new structure is.#2019-08-2401:22Lone RangerSo the challenge for you will be structuring recursive queries. There is a recursive pull syntax available, but you'd have to carefully structure your schema.
So the "illusion" of a recursive delete would be accomplished by doing a retraction near the root of your graph, aka, asserting an empty membership of children -- this would then break the recursion of your query#2019-08-2401:23Lone RangerAnyway that's my two cents, best of luck to you!#2019-08-2403:00George CiobanuHi Goomba! Thank you so much for your help#2019-08-2403:00George CiobanuI'll process and reply once I get what you are saying#2019-08-2403:02George CiobanuNot secretly a Datasceipt question, I actually intend to store this datasctructure in Datomic#2019-08-2403:03George CiobanuIt can be either clj or cljs since both my backend and frontend are Clojure(script)#2019-08-2403:04George CiobanuRegarding the DSL layer I don't think I need it, in the sense that the number of component types is fixed and each has a unique schema that's mostly immutable (I might add properties over time but that's it)#2019-08-2403:05George CiobanuSo each map will map to one component#2019-08-2403:05George CiobanuAnd anything in it's :children key will be subcomponents (in the GUI sense)#2019-08-2403:06George CiobanuRe deletion that makes sense#2019-08-2403:07George CiobanuAnd while I haven't fully understood recursive queries I'm not concerned since I saw several examples and they seem to make sense#2019-08-2417:35George CiobanuSorry for double posting I just wanted to see if anyone has thoughts on this#2019-08-2417:35George CiobanuHi! I am trying to model a tree of components (Page, button group, tab, buttons etc) as a hierarchy of maps (each component has a bunch of attributes). It's very similar to the DOM in that if the user deleted a tab group that has buttons as children, the buttons need to be deleted as well. Of course, the user can also move a subtree of components to a different nodes etc. It's a standard GUI editor.
I think the best way to model this is to make each node a component so that if any node is deleted its children are deleted as well. Does that make sense? Are there any subtleties I'm missing, and should I manage deletion and the whole hierarchy by hand using plain ref types?
Any thoughts much appreciated. A link to an article is also fine (I tried to RTFM but I never saw anyone use components for hierarchies and am wondering why).#2019-08-2418:57favilaThis might be a fit, but generally IsComponent is used to reference an entity which doesn’t have an identity at all apart from its parent#2019-08-2419:00favilaDatomic assumes (but does not enforce) that if there’s an assertion [e iscomponentattr component-e], this is the only datom in the entire db with component-e in the v slot#2019-08-2419:02favilaAt least the d/entity api will also make the reverse-ref of an attr not-a-collection for this reason (even if there is in fact more than one entity pointing to it!)#2019-08-2419:03favilaYou should be careful with “reparenting” a node via an IsComponent attr because you can end up violating this constraint by accident #2019-08-2419:04favilaYou will need a transaction function#2019-08-2419:54George CiobanuThank you Favila, much appreciated. I couldn't find documentation about the assumption you mention, any chance you have a handy link?#2019-08-2420:47George CiobanuSpecifically I'm thinking of this: Components allow you to create substantial trees of data with nested maps, and then treat the entire tree as a single unit for lifecycle management (particularly retraction). All nested items remain visible as first-class targets for query, so the shape of your data at transaction time does not dictate the shape of your queries.
This is a key value proposition of Datomic when compared to row, column, or document stores.#2019-08-2420:47George Ciobanu"all nested items remain visible..."#2019-08-2515:06favilaIt’s not explicitly stated that way anywhere to my knowledge but it’s an inevitable consequence of the special behavior IsComponent attrs get: 1) retractEntity deletes them even if other entities reference them; 2) reverse-ref in entity and pull doesn’t show all reverse refs, only the first one; 3) pull * and d/touch eagerly follow and load the value of those references#2019-08-2419:42Mark AddlemanTrying to deploy an ion to a new Datomic Cloud instance in a new AWS account. The deploy step is failing and Cloudwatch logs reports
{
"errorMessage": "No Deployment Group found for name: mbsolo-Compute-KLOG23BUPMGI",
"errorType": "DeploymentGroupDoesNotExistException",
"stackTrace": [
"Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:51:27)",
"Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
"Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
"Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)",
"Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
"AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
"/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
"Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
"Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)",
"Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
]
}
Weirdly, when I look in the AWS CodeDeploy Console, I see mbsolo-Compute-KLOG23BUPMGI listed in the deployment group.#2019-08-2419:42Mark AddlemanAny thoughts?#2019-08-2419:49Mark AddlemanJust noticed that the push operation reported an empty :deploy-groups list.
{:rev "5c79aede20d112c7ebbdc8a9a65514451a2a6f19",
:deploy-groups (),
:dependency-conflicts
{:deps
{com.cognitect/transit-java #:mvn{:version "0.8.311"},
org.clojure/clojure #:mvn{:version "1.10.0"},
commons-codec/commons-codec #:mvn{:version "1.10"},
org.clojure/tools.analyzer.jvm #:mvn{:version "0.7.0"},
com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.8"},
com.google.guava/guava #:mvn{:version "18.0"},
org.msgpack/msgpack #:mvn{:version "0.6.10"},
com.cognitect/transit-clj #:mvn{:version "0.8.285"},
com.cognitect/s3-creds #:mvn{:version "0.1.23"},
org.clojure/tools.reader #:mvn{:version "1.0.0-beta4"},
org.clojure/test.check #:mvn{:version "0.9.0"},
com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.479"},
org.clojure/core.async #:mvn{:version "0.3.442"},
com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.479"}},
:doc
"The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn."},
:deploy-command
"clojure -Adev -m datomic.ion.dev '{:op :deploy, :group <group>, :rev \"5c79aede20d112c7ebbdc8a9a65514451a2a6f19\"}'",
:doc
"To deploy, issue the :deploy-command, replacing <group> with a group from :deploy-groups"}#2019-08-2419:58Mark AddlemanI found this: https://forum.datomic.com/t/help-deploying-ion-example/717/7#2019-08-2419:58Mark AddlemanFrom the thread, it looks like the solution is to create a new AWS account but it would be helpful to open an AWS support ticket.#2019-08-2419:59Mark Addleman@U1QJACBUM Can you confirm this is the same problem? If so, I could use some guidance on what to put in the support ticket#2019-10-2115:51Joe LaneHa thanks for answering my current and then immediate next question. See you guys at the Conj!#2019-10-2117:02BrianWe have a Datomic db running in our closet that we want to host in AWS. We already have a stack set up and some db's already running in AWS. We have extracted the database from our closet server and are now looking to restore it in AWS. Can someone point me in the right direction for how we can do that? https://docs.datomic.com/on-prem/backup.html looks promising but that is for on-prem and I'm not sure that's what I need#2019-10-2117:12favilaare you moving from on-prem to cloud datomic? Not merely on-prem in closet to on-prem on aws. There’s no supported migration from on-prem to cloud systems https://docs.datomic.com/on-prem/moving-to-cloud.html#2019-10-2117:21BrianDoes "no supported migration" means there is no easy-button or does it mean that it is not possible?#2019-10-2117:27ghadiyou can easily move your on-prem to AWS running on-prem#2019-10-2117:28ghadiby... running on-prem in your AWS EC2 instance#2019-10-2117:28ghadi(on-prem to Datomic Cloud is a different thing as @U09R86PA4 mentions)#2019-10-2117:29favilait’s possible if you do it yourself#2019-10-2117:29favilai.e. 
some variation of read each tx from the old db, make a new tx for it, transact it into the new cloud db#2019-10-2117:30favilathere are feature and other differences between on-prem and cloud that you will have to account for#2019-10-2117:30favilabut there’s no easy backup-and-restore#2019-10-2123:03Msr TimHi I am setting up a new datomic system in a VPC that needs to be accessed by kubernetes container running in another vpc#2019-10-2123:03Msr Timi followed the instructions here https://docs.datomic.com/cloud/operation/client-applications.html#vpc-peering#2019-10-2123:03Msr Tim2019-10-21 23:02:05,824 [main] ERROR app.core - {:what :uncaught-exception, :exception #error {
:cause :server-type must be :cloud, :peer-server, or :local
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message :server-type must be :cloud, :peer-server, or :local}
:via
[{:type java.lang.RuntimeException
:message could not start [#'spot-app.db.core/conn] due to
:at [mount.core$up$fn__385 invoke core.cljc 80]}
{:type clojure.lang.ExceptionInfo
:message :server-type must be :cloud, :peer-server, or :local
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message :server-type must be :cloud, :peer-server, or :local}
:at [datomic.client.api.impl$incorrect invokeStatic impl.clj 42]}]
:trace
[[datomic.client.api.impl$incorrect invokeStatic impl.clj 42]
[datomic.client.api.impl$incorrect invoke impl.clj 40]
#2019-10-2123:04Msr Timits failing at this line (def conn (d/connect client {:db-name "movies"})) #2019-10-2212:10favilaThe cause of your error is something in client: a missing or bad :server-type.#2019-10-2212:12favilahow do you construct your client? that is where the problem lies#2019-10-2212:32Msr Timhttps://docs.datomic.com/cloud/getting-started/connecting.html#2019-10-2212:32Msr Timlike its described there#2019-10-2212:32Msr Tim(require '[datomic.client.api :as d])
(def cfg {:server-type :ion
:region "<your AWS Region>" ;; e.g. us-east-1
:system "<system-name>"
:creds-profile "<your_aws_profile_if_not_using_the_default>"
:endpoint ".<system-name>.<region>."
:proxy-port 8182})
(def client (d/client cfg))
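As favila points out further down this thread, :server-type :ion only works inside the ion environment (or its local dev harness); an external process such as a k8s pod needs the :cloud variant. A hedged sketch (all values are placeholders, and the exact :endpoint format depends on your Datomic Cloud release):

```clojure
;; Sketch: connecting to Datomic Cloud from outside the ion cluster,
;; e.g. a k8s pod reaching the system over VPC peering.
;; System name, region, and endpoint below are placeholders.
(require '[datomic.client.api :as d])

(def cfg {:server-type :cloud
          :region      "us-east-1"
          :system      "my-system"
          :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
          :proxy-port  8182})

(def client (d/client cfg))
(def conn   (d/connect client {:db-name "movies"}))
```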
#2019-10-2212:33Msr Timit works perfectly locally on my machine via the bastion#2019-10-2212:33Msr Timbut gives me that error on k8s#2019-10-2212:33Msr Timso it cant be the configuration#2019-10-2212:33favilak8s is not an ion#2019-10-2212:33Msr Timoooh#2019-10-2212:33favilayou are connecting from “outside” the ion cluster#2019-10-2212:34favilaso you need to use a different peer connection type#2019-10-2212:34Msr Timooh#2019-10-2212:34Msr Timgotcha#2019-10-2212:34favilait works locally because there’s a local ion dev environment#2019-10-2212:34Msr Timunderstood#2019-10-2212:35favilayou likely need :cloud#2019-10-2212:36Msr Timi tried it with :cloud but I still get the same error#2019-10-2212:36favilathe same exact error?#2019-10-2212:37Msr Timlet me try one more time now#2019-10-2123:05Msr Timshould i assume that the error message is wrong?#2019-10-2123:06Msr Timsince I am not using async api as suggested here https://docs.datomic.com/cloud/troubleshooting.html#async-ion#2019-10-2200:31cjsauerIs it unwise to use :db/noHistory on a :db.unique/identity attribute that is meant to identify ephemeral, high-churn entities?#2019-10-2203:48ackerleytngI'm passing as-of a Date from clj-time's to-date but I'm getting a casting error, something to do with datomic idb. Has anyone had this issue before?
class java.util.Date cannot be cast to class datomic.db.IDb (java.util.Date
is in module java.base of loader 'bootstrap'; datomic.db.IDb is in unnamed
module of loader 'app')
#2019-10-2212:10benoitAre you passing the date as the first argument instead of the second?#2019-10-2301:57ackerleytngNope, here's my code
(let [time (tc/to-date
(t/from-time-zone (t/date-time 2019 10 22 16 50 0)
(t/time-zone-for-offset +8)))
db-then (d/as-of (d/db conn) time)]
(d/q '[:find ?doc
:where [_ :db/doc ?doc]]
db-then))
#2019-10-2302:01ackerleytngand db-then is a db...
(let [time (tc/to-date
(t/from-time-zone (t/date-time 2019 10 22 16 50 0)
(t/time-zone-for-offset +8)))
db-then (d/as-of (d/db conn) time)]
(type db-then)) => datomic.client.impl.shared.Db
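For what it's worth, one common way to produce that particular ClassCastException is passing the java.util.Date where the query engine expects a database value, i.e. swapping the data source with another input to d/q; a sketch of the correct shape (the #inst value is illustrative):

```clojure
;; The as-of database value must be the data source given to d/q;
;; putting the Date in the db position yields
;; "java.util.Date cannot be cast to ... IDb".
(let [db-then (d/as-of (d/db conn) #inst "2019-10-22T08:50:00.000-00:00")]
  (d/q '[:find ?doc :where [_ :db/doc ?doc]] db-then))
```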
#2019-10-2214:34arnaud_bosThis made me laugh very hard. Thought it'd be of interest to the immutability fans out there 😂
https://github.com/gfredericks/quinedb#2019-10-2214:35arnaud_bosFound via https://twitter.com/andy_pavlo/status/1186636813458432000#2019-10-2214:37arnaud_bosDon't forget the FAQ section.#2019-10-2214:41arnaud_bosJust found out the author is in this slack...#2019-10-2214:53dakraHi. I want to "play around" with datomic cloud but I'm having trouble completing the tutorial. The datomic cloud setup on AWS all seemed to work fine. I have datomic-access running and I can do the curl -X socks call from the tutorial and I get a successful response with s3-auth-path. But when I try (d/create-database client {:db-name "testion"}) I get:
Unable to find keyfile at
. Make
sure that your endpoint and db-name are correct.
#2019-10-2215:14Msr Timyou need permissions to access those s3 files#2019-10-2215:15Msr Timdid you see that file in s3 ?#2019-10-2215:17dakraI'm new to AWS. I made an IAM user and gave him AmazonS3FullAccess permissions. Is that enough? How can I test access to those files?#2019-10-2215:38dakraI'm now the root user and still same problem. I'll try and delete and re-create the cloud-formation. maybe this helps#2019-10-2215:20dmarjenburghWe recently upgraded from the Solo to the Production topology. When using a lambda proxy, the request contains a requestContext with authorizer claims parsed from the oauth2 token in the request. When using the VPC Proxy, this information is missing. Is there a way to retrieve it?#2019-10-2217:00Joe Lane@dmarjenburgh We ended up parsing the token (jwt in our case) in a pedestal interceptor to work around this missing piece in http-direct.#2019-10-2217:01dmarjenburghYeah, figured that would be the thing to do. Thanks#2019-10-2217:02Joe LaneIf you need to see the contents of the request I suggest casting the request object and looking at it in cloudwatch (seemed to be the only way to debug it)#2019-10-2217:33dmarjenburghDid you use a library for parsing the jwt, or just base64decode it yourself?#2019-10-2218:45Joe LaneDecode it myself#2019-10-2220:04cjsauerAnyone tried this? Is there a foot-gun lurking here?#2019-10-2220:06cjsauerI’m thinking of also using :db/isComponent for all attributes on these ephemeral entities so that I can retract them in one fell swoop.#2019-10-2220:53favilaWhat does that gain over retractEntity?#2019-10-2222:12cjsauerAh yeah, good point. #2019-10-2220:07cjsauerI have a feeling tho that the sage advice might be to not store this type of data in datomic. I’m attempting to avoid bringing in another storage mechanism.#2019-10-2220:52favilawhy do you think there might be some special problem here?#2019-10-2222:14cjsauerJust looking ahead to see if this is a known bad idea.
I’m gathering that it’s probably just fine tho. #2019-10-2222:26favilaa minor caveat is that noHistory is not a guarantee of no history ever, just that history will be dropped from indexes. so you may still see some history between the last indexing job and now#2019-10-2222:27favilaalso I don’t know if history disappears from transaction logs#2019-10-2300:43cjsauerI see in the docs that the indexes are stored in S3. Would it be correct to say that :db/noHistory = :db/noS3? Or just that the datom will eventually not exist in S3?
> The effect of :db/noHistory happens in the background
Maybe that’s what this means. The datom is eventually scrubbed from S3 in the background..?#2019-10-2300:47cjsauer> also I don’t know if history disappears from transaction logs
Looking in the docs again, this would mean that it’s still stored somewhere at the end of the day, yeah? DDB in this case. And these datoms would still show up in d/tx-range.#2019-10-2301:56favilaI don’t know the ins-and-outs of cloud#2019-10-2302:00favilafor on-prem, tx log data is written to storage and kept in peer+transactor memory until the next index kicks in. Reads transparently merge the last index time with the in-memory index derived from the tx log; but when the in-memory index is flushed to storage, the history of no-history attributes is not written. I don’t know if the transactor also takes the additional step of rewriting the tx-log to remove attribute history, but it seems unlikely to me. For cloud, I don’t know the precise mechanics of where that in-memory log goes or what precisely happens during indexing#2019-10-2302:01favilaanyway, this is easy to test. If it matters to you it’s probably better than listening to me speculate#2019-10-2302:03favilaactually only on-prem is easy to test. on cloud there’s no d/request-index, so you will have to induce it some other way (probably via lots of writes).#2019-10-2308:06mkvlrit will stay in history logs#2019-10-2516:45cjsauerThanks for the info guys 🙏#2019-10-2221:51Msr Timhow much AWS expertise does one need to run and maintain datomic? I finally set up a test production topology. it set up tons and tons of AWS things that i don't really grasp.#2019-10-2222:32Msr TimIs there a future possibility of a hosted version of datomic#2019-10-2223:01favila…you mean on-prem?#2019-10-2312:21benoitI think he meant a version where all you have to do is get credentials, download the client and you're good to go. This is what I thought cloud was going to be initially. Right now, you still have a lot of moving pieces with all the AWS stuff you have to setup yourself.#2019-10-2313:08Msr Timyeah.
Exactly.#2019-10-2313:09Msr TimI don't feel confident at the moment that i can maintain AWS setup on my own in a small team#2019-10-2313:10Msr Timmaybe someday if i get up to speed with AWS properly#2019-10-2313:15favilaPossibly on-prem has a lower support burden? It is less embedded into aws#2019-10-2313:15favilarun a transactor, run a peer, use dynamo for storage#2019-10-2316:38Msr Timah.. maybe#2019-10-2316:38Msr Timbut i would really prefer not running my own database#2019-10-2313:24zachcpHi datomic users - anyone have any tips on early stage data modelling for Datomic? I’d be interested in blog posts about 1) the early design phase of a project prior to creation or 2) tools to facilitate exploration or schema creation (like the codeq schema https://github.s3.amazonaws.com/downloads/Datomic/codeq/codeq.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAISTNZFOVBIJMK3TQ%2F20191023%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20191023T131443Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=51d040c7b4a25ac20cb3f81026f005b3b062f211d6fe22db2d1f17bfc54a3d9f)#2019-10-2313:36Alex Miller (Clojure team)we usually use Omnigraffle to make those datomic schema diagrams#2019-10-2313:40Alex Miller (Clojure team)the techniques here though are very similar to classical ER diagrams, with the distinction that Datomic is more flexible than your typical "tables" of ERD (attributes can be common across "entities", ref types refer to other entities directly, not via PK/FK, cardinality, components, etc)#2019-10-2313:40Alex Miller (Clojure team)most of that stuff can just be annotated on the diagram though. from a big picture you're still drawing tables and lines#2019-10-2314:21zachcpThanks @alexmiller. Do you have any suggestions on how to think about early stage data design - e.g. trade-offs around making your data model “flatter” or not.
Or, in your experience, does a natural degree of partitioning begin to emerge as you start modeling the data?#2019-10-2314:29Alex Miller (Clojure team)in general, I find modeling with Datomic usually lets you be pretty close to a logical ERD and there is no reason not to break things out the way you like. you can think more "table"-like, but also do a mixture of graph-like things (and in my experience most enterprise apps are 85% "table"y and 15% "graph"y - Datomic gives you the best of both worlds)#2019-10-2314:30Alex Miller (Clojure team)it's definitely good to go as far as you can in diagrams before you ever write any code or schemas - changing diagrams is a lot faster :)#2019-10-2314:38zachcp:+1:#2019-10-2320:23schmeejust tried to run Datomic Pro 0.9.5981 locally, using client-pro 0.8.28, and I’m running into this issue: https://forum.datomic.com/t/ssl-handshake-error-when-connecting-to-peer-server-locally/1067#2019-10-2320:23schmeenot running in Docker, and setting :validate-hostnames false in the client config doesn’t do anything it seems#2019-10-2320:24schmeeI’m following this guide: https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#2019-10-2320:44schmeeupgrading client-pro to 0.9.37 fixed the issue#2019-10-2415:45ssdevHey folks, I'm interested in trying datomic cloud, but want to test it out first. I'm going through the subscribe process now and seeing the estimated monthly costs at $118.00. Is there a way to test this out for free? Is this just an estimate based on receiving high traffic? I'm currently in the aws free tier period and hoping to just go through the ion tutorial without incurring any fees#2019-10-2415:48Joe Lane@UNCFMJ7QE the datomic solo topology should not cost $118.00. It should cost ~$1 per day, or roughly $30 per month.#2019-10-2415:49ssdevOk. I selected solo topology and seeing this, but I'll just assume and hope it's over-estimating#2019-10-2415:50kennySolo should run on a t2.small.
Not sure why that is saying i3.large.#2019-10-2415:50Joe LaneI do not think you selected solo. The solo topology should use a t3.small#2019-10-2415:50Joe Laneyeah. (they bumped from t2 to t3).#2019-10-2415:52ssdevhere's a full page screen shot -#2019-10-2415:52ssdev#2019-10-2415:53Joe Lane@U1QJACBUM ^^ Might want to check that out#2019-10-2415:54Joe Laneeither way @UNCFMJ7QE , I think later on in the process when you actually select which cloudformation template to use, if you pick the solo cloudformation template that is what should be used.#2019-10-2415:56ssdevoooo k. I'll continue on and cross my fingers and light some incense#2019-10-2415:57jaretyeah, that is probably an error on the AWS page. Let me test.#2019-10-2416:00jaretYep, I just confirmed, it actually launches a SOLO template and uses the t3.small, but the calculator and marketplace listing seem to be wrong.#2019-10-2416:00jaretIronically, the Prod template shows the “solo” calculation#2019-10-2416:00jaret#2019-10-2416:10jaretI’ve logged a request with AWS Marketplace to fix @UNCFMJ7QE, but the estimate you see on the marketplace page is flipped. You can look at production to see solo or look at a previous version of the software to see the correct estimates. Sorry about the confusion.#2019-11-0115:39ssdevHey @U1QJACBUM have you heard back from aws concerning this switch? We were also looking at query group prices today and wondering if the prices quoted are correct or not. Also curious what the difference is between "Query Group 1" / "Query Group 2", "Production 1" / "Production 2"?#2019-10-2418:28pvillegas12Can I have a Solo Topology with two t2.medium instances to allow me to deploy without taking my service down or do I have to use the Production Topology instead (this would allow me to deploy and it would be able to serve traffic in between?
)#2019-10-2421:07stuarthallowayNot at present, but we understand that use case and have been thinking about it.#2019-10-2420:03cgrandI’m encountering a weird behavior: it seems like :db/idents go into a cache and they are never removed from the cache:#2019-10-2420:04favilathat is correct#2019-10-2420:04favilad/entid uses this cache#2019-10-2420:04favilait’s so you can rename an ident without altering your code#2019-10-2420:05cgrandd/transact too#2019-10-2420:05favilaeverything flows through d/entid#2019-10-2420:05cgrandI want to get rid of an attribute and be sure it’s not used anew#2019-10-2420:05favilaexcept some query clauses#2019-10-2420:06favilaput that same ident on a non-attribute; now any attribute-like use of it will fail#2019-10-2420:07cgrandok thanks for the workaround#2019-10-2420:07favilayou can retract afterwards#2019-10-2420:08favilathink of the ident cache as a map of key to eid which only assocs assertions and ignores retractions#2019-10-2423:59Luke SchubertI'm curious is there a general ballpark pricing for datomic on prem enterprise/OEM?#2019-10-2509:23Shaitanhow to limit search for a particular day? I have a field in the entity with type :db.type/instant.#2019-10-2509:57souenzzo@kalaneje you can (d/q '[:find ?id :in $ ?limit :where [_ :user/id ?id ?tx] [?tx :db/txInstant ?inst] [(> ?inst ?limit)]] db #inst"...")
Will return all user ids that were transacted after #inst".."#2019-10-2511:15magnarsI'm currently at a client using a very old Datomic version (0.8.4138) - and was wondering how I should go about updating. Could I safely bump the transactor version while staying on the old client API? Or the other way around? Or do I need to time it exactly to upgrade both at the same time?#2019-10-2511:20favilaNormally you can update peer and txor versions in any order except in the cases mentioned here https://docs.datomic.com/on-prem/release-notices.html#2019-10-2511:21favilaHowever that version is so old I recommend a backup, shutdown, upgrade, and restore if you can get away with it#2019-10-2511:21magnarsThanks, that makes sense. 👍#2019-10-2519:23markbastianIf I have a Datomic Cloud system that I am not currently using do I just stop the instances of the system + bastion to prevent being charged for it or do I need to do anything else, like delete the stack?#2019-10-2519:25Joe Lane@markbastian modify the autoscaling group by setting 0 min instances and 0 desired instances on both the bastion and other nodes. That will bring them all down. It wont save all the cost because you're still paying for storage of existing data, but it's as cheap as I think you can get.#2019-10-2519:26jaretand @markbastian if you want to get rid of the storage cost etc and totally remove datomic you can follow this doc#2019-10-2519:26jarethttps://docs.datomic.com/cloud/operation/deleting.html#2019-10-2813:53onetomis it possible to use enums in tuple lookup refs?
eg, this works:
(d/entity db [:rule/algo+expr [17592186045417 "XXX"]])
but if i use an ident ("enum") in place of that eid, then i just get nil:
(d/entity db [:rule/algo+expr [:rule.algo/regex "XXX"]])
where
(d/pull db '[*] :rule.algo/regex)
=> #:db{:id 17592186045417, :ident :rule.algo/regex}
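If the ident form really won't resolve inside the tuple, one workaround sketch is to resolve the ident to its entity id first with d/entid before building the lookup ref (untested against this exact schema):

```clojure
;; Resolve the ident to an eid, then use the eid in the tuple lookup ref.
(d/entity db [:rule/algo+expr [(d/entid db :rule.algo/regex) "XXX"]])
```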
#2019-10-2814:09marshallErm. That seems like it should work. Let me look into it#2019-10-2815:55onetomin my specific use-case, i think i can use a keyword instead of a ref, but it still looks like a bug and i suspect there are legitimate use-cases which might want to do this#2019-10-2816:08onetomhmmm... im still not sure about what the lookup-ref should look like 😕
im getting this error, when I'm trying to transact:
{:txn/id txn-id
:txn/matches [[:rule/algo+expr [:regex expr]]]}
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/not-an-entity Unable to resolve entity: [:rule/algo+expr [:regex "XXXX"]] in datom [-9223301668109555930 :txn/matches [:rule/algo+expr [:regex "XXXX"]]]
#2019-10-2816:09onetomwhere :txn/matches is
{:db/ident :txn/matches
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many}
#2019-10-2816:47onetomlookup ref works when using the datom style:
(tx [[:db/add "x" :txn/id 1]
[:db/add "x" :txn/matches [:rule/algo+expr [:regex "XXX"]]]])
but fails with :db.error/not-an-entity Unable to resolve entity: :regex when using the entity-map style
(tx [{:txn/id 1
:txn/matches [:rule/algo+expr [:regex "XXX"]]}])
#2019-10-2816:48onetom(at least when the tuple attr's 1st element is a keyword, not a ref)#2019-10-2816:49marshallfor the card-many you need an extra [] around it#2019-10-2816:50marshallhm. or maybe not#2019-10-2816:50marshallwhat’s the schema definition of :rule/algo+expr#2019-10-2816:55onetom{:db/ident :rule/algo+expr
:db/valueType :db.type/tuple
:db/tupleAttrs [:rule/algo :rule/expr]
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
#2019-10-2816:56onetomand yes, i've tried with and without an extra bracket and it works both ways when using entity-map style and only works without when using datom-style, which is quite logical#2019-10-2816:56marshallright#2019-10-2813:53onetomit looks like :rule.algo/regex is treated just as a scalar (keyword) type
{:db/ident :rule/algo
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :rule.algo/regex}
{:db/ident :rule.algo/substr}
{:db/ident :rule/expr
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :rule/algo+expr
:db/valueType :db.type/tuple
:db/tupleAttrs [:rule/algo :rule/expr]
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
#2019-10-2813:57onetomin the example from the docs (https://docs.datomic.com/on-prem/schema.html#composite-tuples)
there is this txn:
[{:reg/course [:course/id "BIO-101"]
:reg/semester [:semester/year+season [2018 :fall]]
:reg/student [:student/email "
where :fall is one of the tupleAttrs, but its type is just :db.type/keyword
{:db/ident :semester/season
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one}
#2019-10-2814:07onetomthe same doc page further down says:
### External keys
All entities in a database have an internal key, the entity id. You can use :db/unique to define an attribute to represent an external key.
An entity may have any number of external keys.
External keys must be single attributes, *multi-attribute keys are not supported*.
#2019-10-2814:08marshallWell, that's not exactly true anymore bc of tuples#2019-10-2814:09onetomok, then i understood it correctly#2019-10-2814:14onetomsince we are talking about tuples, i've also noticed that datomic-free doesn't support tuple value types.
is it going to be updated, or are tuples a pro-only feature?#2019-10-2814:28akielI have asked Cognitect regarding this issue. The answer was that they don’t plan to add features to the free edition at the moment.#2019-10-2814:34akielYou can use the Starter Edition, which is also free.#2019-10-2814:35onetomsure, it's just a bit more troublesome to download for a team, which is just about to learn datomic aaand clojure at the same time... from me...#2019-10-2814:43akielI know and I also don’t like it. It would be good to write a mail to [email address redacted], explaining your situation. Doing so may help to change things.#2019-10-2815:07onetomWhat would you propose as an alternative?
I'm not sure how the situation could be improved.
It's an awesome technology, so I understand why Cognitect is keeping it on a short leash...
The client lib is downloadable without hassle at least.#2019-10-2815:11onetomI would be happy with the free version too, btw, but since I've diligently read thru the last 3 years of changelogs and learnt about the tuple support, now I want it badly :)
But I guess I might just step back a bit and use txn functions to implement composite keys, like 3 years ago...#2019-10-2815:16souenzzoWithout a free edition it's harder to have awesome tools like https://github.com/vvvvalvalval/datomock and https://github.com/ComputeSoftware/datomic-client-memdb
Also harder to convince people to use/learn it.
It goes from "way easier to configure than SQL: just add the dependency and use it"
to "oh, you will need to create an account, add a custom repo and its credentials. You cannot commit your credentials. Then you will have access to one year of updates"...
😞#2019-10-2815:21onetomWhich process - I guess - acts as a filter or throttle and only seriously interested ppl bother with using Datomic#2019-10-2815:23onetomI agree, it's a pity, but I'm still very grateful that Datomic exists at all :)#2019-10-2815:27onetomI was also pleased to see that tools.deps takes the ~/.m2/settings.xml file into account and it's even explained how to separate your login credentials from the per-project maven repo settings in your deps.edn#2019-10-2815:31onetomAll this info is a little too scattered and requires a lot of background knowledge and I feel bad about it, because I have to explain all these quirks to my colleagues too.
I'm sure they will ask "how am i supposed to discover all this on my own" and they will feel insecure if I have to tell them that they indeed would have a hard time doing this alone...#2019-10-2815:34onetomIm planning to have datascript around too, so they can quickly experiment, but I'm not sure how different it is from Datomic, coz I never used it...#2019-10-2815:38souenzzoDatascript had used datomic-free to check if it implements some features/behavior close to datomic
unfortunately it can't be done with new datomic features...
non-free datomic is about to kill its small community 😞 (yes, as a previous peer user and now a cloud consumer, I am REALLY sad about it)#2019-10-2815:40onetomwhy so sad about the cloud version?#2019-10-2816:07souenzzoDespite working in a prime region of a Brazilian capital, I have many issues with my (all available in my region) ISP. I already lost many days of work due to no internet connection
at the beginning of my current project, i used datomic-client-memdb to work offline and datomock to create/reuse test scenarios
After the last datomic update, everything was broken. I needed to re-write my test scenarios and I'm unable to work offline
Also, moving from datomic-client-memdb to "client proxy", my deftest goes from 0.1ms to 10s. "run all tests" from 1m to 10m (and it FAIL when my internet goes down)#2019-10-2818:03cjsauer@onetom have you researched datahike? This might be a good middle-ground for your students. https://github.com/replikativ/datahike
I’ve been considering switching to it myself for all the same pains that @U2J4FRT2T is feeling, and the fact that it’s open source. Datomic appears entirely uninterested in fostering a community, and so my long-term bet is on something like datahike.#2019-10-2818:20onetomNo, I have not encountered datahike yet. Thx for putting it on my radar!
I also have issues with my internet connectivity (I live on Lamma island in Hong Kong and usually only get 0.5-3 Mbit/s)...#2019-10-2814:14onetomalso the latest changelog link (https://my.datomic.com/downloads/free/0.9.5703.21/changes) is broken on the https://my.datomic.com/downloads/free page#2019-10-2814:28akielThis issue is also known. The last update to the free edition is about one year old.#2019-10-2814:37onetom@zach.charlop.powers @alexmiller u were talking about data modeling the other day.
what's your take on Hodur?
https://www.youtube.com/watch?v=EDojA_fahvM&t=1120s#2019-10-2814:41onetomand the repo is this i guess:
https://github.com/hodur-org/hodur-engine
plus the visualization UI:
https://github.com/hodur-org/hodur-visualizer-schema#2019-10-2814:47Alex Miller (Clojure team)sorry, don't know anything about it#2019-10-2814:53onetomregardless, thank you for strangeloop!
i've learnt immense amounts from it.#2019-10-2814:52zachcpI haven’t used it but I’ll take a look. thanks @onetom#2019-10-2815:04cjsauerIs there a way to bind a whole datom to a logic variable in a query? Something similar to :as, e.g. :where [[?e ?a ?v :as ?datom]]. I’m looking for an alternative to d/filter given its unavailability in Cloud, and am thinking that I could use rules in order to simulate its effect.#2019-10-2815:18souenzzo[(tuple ?a ?b ?c) ?datom] [(valid? $ ?a ?b ?c)]
Not sure about performance#2019-10-2815:32cjsauerAh tuple, I kept trying to destructure with []. Still running into this tho:
"[?a ?v ?e] not bound in expression clause: [(tuple ?e ?a ?v) ?datom]"
#2019-10-2815:34souenzzotuple is a new feature from datomic. from the last release#2019-10-2815:34souenzzo[(valid? $ ?datom)] *#2019-10-2816:23cjsauerThanks @U2J4FRT2T, I got a few queries working. You can actually bind the tuple components first before aggregating them, which allows for unification to work in both directions, e.g.
:in $ % ?user
:where
[?e ?a ?v ?tx ?op]
[(tuple ?e ?a ?v ?tx ?op) ?datom]
(authorized? ?datom ?user)
Performance is probably less than ideal tho, like you mentioned.#2019-10-2817:13souenzzoyou can use [?a :db/ident] to avoid "full db scan" error#2019-10-2816:56cjsauerRelated to the above, how big of a bribe is required to get d/filter support in Cloud? 😜
Would be such an amazing way to handle authorization in Ion applications. I can imagine filtering the database per user request based on some authorization rules, which would prevent one from needing to enforce those rules ad-hoc all over the system.#2019-10-2818:24onetomif i have card-many attribute, how can i constrain my results based on its cardinality?
(something like the HAVING clause in SQL)?
the one stackoverflow article i found on this topic recommends nested queries
(d/q '[:find [(pull ?e [* {:txn/matches [*]}]) ...]
:with ?e
:where
[?e :txn/matches ?m]
[(count ?m) ?matches]
[(< 1 ?matches)]])
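For reference, the nested-query approach that the stackoverflow answer points at might look like the following. This is only a sketch against the same hypothetical :txn/matches attribute, untested; the key point is that the aggregation has to happen in an inner query's :find, since aggregation cannot be performed in :where clauses.

```clojure
;; sketch: aggregate in an inner query, then filter on the count
;; (assumes the :txn/matches attribute from the question above)
(d/q '[:find [(pull ?e [* {:txn/matches [*]}]) ...]
       :where
       [(q '[:find ?e (count ?m)
             :where [?e :txn/matches ?m]]
           $) [[?e ?match-count]]]
       [(< 1 ?match-count)]]
     db)
```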
#2019-10-2819:43favilaanother option is d/datoms with a bounded-count for this simple case, anything harder needs a subquery because you cannot perform aggregation before the :find stage#2019-10-2900:00pvillegas12I want to upgrade from Solo -> Production, but my datomic database is currently serving a paid product. Is there a way to make this operation reversible in case it does not work as expected?#2019-10-2907:31dmarjenburghYou can probably update the stack with the solo template to revert.#2019-10-2900:00pvillegas12Has anyone encountered a problem where your whole system goes down when a code deploy is initiated by a Autoscaling group action? This event is taking down our system which is then restored if we use another deployment.#2019-10-2905:03xiongtxI'm wondering why the #db/fn reader macro doesn't work with clj code, only EDN.
#db/fn {:lang "clojure"
:params []
:code (inc 1)}
when evaluated gives
Can't embed object in code, maybe print-dup not defined:
which IIUC means that it's trying to eval the delay, though I'm not sure why that is happening.
The question has been asked previously here, but w/out an answer: https://clojurians-log.clojureverse.org/datomic/2016-01-02/1451699503.001427#2019-10-2905:36hiredmanbecause the compiler generates bytecode and doesn't know how to embed arbitrary objects (like the delay) in bytecode. if there is no special casing of how to embed some object in bytecode, the compiler falls back to calling pr, embedding the string, and then calling read-string when the bytecode is run#2019-10-2905:38hiredmansame thing: user=> (defmacro f [] (delay nil))
#'user/f
user=> (fn [] (f))
Syntax error compiling fn* at (REPL:1:1).
Can't embed object in code, maybe print-dup not defined:
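One workaround for the #db/fn-in-clj-source problem discussed above, assuming the on-prem peer API: build the function object at runtime with datomic.api/function, quoting the :code form so the compiler only ever sees plain data (a sketch, untested):

```clojure
(require '[datomic.api :as d])

;; instead of a #db/fn reader literal in .clj source, construct the
;; function object at runtime; the quoted :code form is data, so the
;; compiler never tries to embed the compiled delay in bytecode
(def my-fn
  (d/function {:lang   "clojure"
               :params '[]
               :code   '(inc 1)}))

;; it can then be installed like any database function, e.g.
;; (d/transact conn [{:db/ident :my/fn :db/fn my-fn}])
```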
#2019-10-2909:04sooheonI’m not getting results binding :db.type/float values, like (d/q '[:find ?e :where [?e :some/attr 1.0]] db), only for some values. I.e. I know that valid values are (1.0 2.0 3.5), and only 2.0 returns results in that query. What could I be missing?#2019-10-2909:11sooheonI now see that the following query works:
(d/q '[:find (pull ?e [*])
:where
[?e :some/attr ?v]
[(> ?v 3.4)]
[(< ?v 3.6)]]
db)
So it seems like a floating point error issue for exact comparisons. I guess I should be using :db.type/bigdec if I care about writing queries for exact values?#2019-10-2911:55pvillegas12Our Datomic Cloud Solo System is failing completely, the API times out with a 504. How can we go about debugging this in AWS?#2019-10-2912:04dmarjenburghTry to pinpoint where it goes wrong first and whether it’s a Datomic issue or an ApiGateway configuration.
- What happens when you invoke the lambda directly instead of through apigw?
- Can you connect to the database through the bastion?
- Do the datomic CloudWatch logs have anything unusual?#2019-10-2912:16pvillegas12CloudWatch logs don’t show anything, that is the unusual part, they start not reporting anything about datomic#2019-10-2912:16pvillegas12I’m going to try 1-2 to replicate#2019-10-2912:58marshallif your datomic system cloudwatch logs just “stop” you should forcibly restart your compute instance#2019-10-2912:59marshallhttps://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-solo#2019-10-2917:44madstapI'm trying to shovel some data from kafka into datomic cloud. Is there a ready made kafka connect sink for datomic cloud or should I write my own?#2019-10-2918:11BrianI'm wondering what the best way is to validate my data when working with Datomic Cloud. I have a sha-256 hash that I want to check if it is actually a valid sha-256 before inserting it. I have a function written. Should I do that check manually before inserting it or is there a way to have rules on certain attributes?#2019-10-2918:12marshall@brian.rogers https://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates#2019-10-2918:12BrianThank you!#2019-10-2918:29Brian@marshall I get a 'hash/sha-256?' is not allowed by datomic/ion-config.edn - {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message \"'hash/sha-256?' is not allowed by datomic/ion-config.edn\", :dbs [{:database-id \"<id>\", :t 24, :next-t 25, :history false}]}"}}. Does this indicate that I need to push that function up as a transaction function?#2019-10-2918:30marshall@brian.rogers yes, “Attribute predicates must be on the classpath of a process that is performing a transaction.” - for Cloud that means they need to be ions#2019-10-2918:30BrianSweet thank you =]#2019-10-2918:30marshallnp#2019-10-2919:52jjttjjIs there a way to combine the results of these two queries in a single query? providing a default value for "status" when the join cannot be made? 
I've been messing with get-else but I don't think it's exactly what I need here
;;find placed orders requests that have not been acknowledged with a
;;received status message
(d/q
'[:find ?oid
:where
[?e :iboga.req.place-order/order-id ?oid]
(not [?status-msg :iboga.recv.order-status/order-id ?oid])]
(d/db DB))
;;find placed orders requests that have been acknowledged with a
;;received status message and join with status
(d/q
'[:find ?oid ?status
:where
[?e :iboga.req.place-order/order-id ?oid]
[?status-msg :iboga.recv.order-status/order-id ?oid]
[?status-msg :iboga.recv.order-status/status ?status]]
(d/db DB))
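(An aside on why get-else alone doesn't fit the two queries above: it supplies a default for an attribute read off an entity that is already bound, not for a failed reverse join. A generic sketch, where :order/status is a hypothetical attribute on the order entity itself, untested:)

```clojure
;; get-else works when the entity is bound and the attribute may be absent:
(d/q '[:find ?oid ?status
       :where
       [?e :iboga.req.place-order/order-id ?oid]
       [(get-else $ ?e :order/status :none) ?status]]
     (d/db DB))
;; :order/status is hypothetical -- this shape requires the status to live
;; on the order entity rather than on a separate status-message entity
```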
#2019-10-2920:10benoitSomething like this might work:
(d/q
'[:find ?oid ?status
:where
[?e :iboga.req.place-order/order-id ?oid]
(or-join [?oid ?status]
(and [?status-msg :iboga.recv.order-status/order-id ?oid]
[?status-msg :iboga.recv.order-status/status ?status])
(and (not [?status-msg :iboga.recv.order-status/order-id ?oid])
[(ground :none) ?status]))]
(d/db DB))
But I wonder why your status message entity cannot point directly to the order entity. Why do you have to do this "join" on the order id value?#2019-10-2920:13benoitThis (not [?status-msg :iboga.recv.order-status/order-id ?oid]), in particular, might be inefficient.#2019-10-2920:16jjttjjThat works, thanks! So you mean just having the order-id be a :db/unique attribute so all the attributes above point to the same entity, then just doing get-else for the status?#2019-10-2922:35benoitNo, I mean having an attribute :iboga.recv.order-status/order that directly points to the order entity. Why do you have to use the "order id" value to connect the two entities?#2019-10-2921:37schmeecan you use attribute predicates with database functions, or does it only work with functions on the classpath?#2019-10-3018:17ssdevHey folks, noob question for ya. I'm messing around with web service ions currently and am curious how I can develop locally so I can see what happens when a web request comes in. Is there a way to run these functions locally?#2019-10-3104:44dmarjenburghYou can run a local web server (e.g. Jetty) with your ring handler. Usually the ion handler is just wrapping the ring handler with datomic.ion.lambda.api-gateway/ionize. See https://docs.datomic.com/cloud/ions/ions-reference.html and the ion-starter project.#2019-11-0116:32ssdevcool. Thanks @U05469DKJ#2019-10-3022:18daniel.spanielanyone else found that they can't use a cloud connection to datomic (in the last month or 2 something changed) and now the connection I make from development to the cloud db hangs after the first try. does 1 hit and then refuses to do more. so odd .. so debilitating#2019-10-3022:25daniel.spanielthe connection hangs with this call <ws://127.0.0.1:9630/ws/worker/main/bd3f3e48-e8b8-438e-840c-61ae23f451cf/33666b32-f7f8-45d7-bbe6-7d54f906fa94/browser>#2019-10-3022:25daniel.spanielvery interesting#2019-10-3022:38souenzzo@dansudol this 9630 port looks like #shadow-cljs stuff.
#shadow-cljs should be used at dev-time only#2019-10-3022:41daniel.spanielI know you're right. I killed shadow-cljs but it's still hanging#2019-10-3022:42daniel.spanielnot sure how this ever worked before because we used to develop off a cloud connection running shadow-cljs too .. bizarre#2019-10-3022:42daniel.spanielwe use a mem db now locally so it's been a while#2019-10-3105:08onetomdo i see it correctly that on-prem datomic doesn't provide nested queries via a built-in q query function?
the cloud version's documentation mentions this feature at:
https://docs.datomic.com/cloud/query/query-data-reference.html#q
(d/q '[:find ?track ?name ?duration
:where
[(q '[:find (min ?duration)
:where [_ :track/duration ?duration]]
$) [[?duration]]]
[?track :track/duration ?duration]
[?track :track/name ?name]]
db)
#2019-10-3105:13onetomah, nvm, i hadn't realized that i have to quote the inner q's query parameter too.
here is the most minimal example i could come up with (which works on an empty db too):
(d/q '[:find (pull ?e [*])
:where
[(q '[:find ?x . :where [(ground :db/doc) ?x]]) ?x]
[?e :db/ident ?x]]
(d/db conn))
#2019-10-3105:22onetomso this built-in q function is not in the on-prem docs.
it should come after https://docs.datomic.com/on-prem/query.html#missing to be consistent with the cloud docs.
where can i report such documentation issues?#2019-10-3106:08csmYou can use datomic.api/q within a query. It’s not a “built-in” function, but you can use it like you can use any function on your class path.#2019-10-3114:52favilaQuery forms are evaluated as if in an environment with (require '[datomic.api :as d :refer [db q])#2019-10-3114:53favilathat’s why bare “q” works and seems special#2019-10-3114:53favilaIt’s really datomic.api/q#2019-10-3115:15onetomah, i see!
so, in the cloud version's doc it's important to highlight this, since in such a setup, the query is not running in the app's process?#2019-10-3115:24favilayes; you have no control over requires or ns aliases in the cloud whereas you do on on-prem. Although even in cloud I think it will auto-require fully qualified ns vars, so you can add custom functions to the classpath? I know this happens for transactions, not sure for queries#2019-10-3116:43Oleh K.Guys, can I connect to datomic cloud from multiple services via .<system>.<aws_zone>. ? Currently when my one service is connected to datomic another one cannot#2019-10-3116:55onetomwhat is the error message u get?#2019-10-3116:56Oleh K.[org.eclipse.jetty.client.HttpClientTransport:149] - Could not connect to HttpDestination[.<system>.]6e48b9ed,queue=1,pool=DuplexConnectionPool[c=1/64,a=0,i=0]#2019-10-3116:57Oleh K.<system> is a real name#2019-10-3116:58Oleh K.the service is running in the same instance as the main one (in datomic vpc)#2019-10-3117:03Oleh K.it's also a Solo topology, if it makes difference (don't see anything about that in the documenation)#2019-10-3117:06onetomdoesn't sound like a datomic related issue to me.
can you try to just directly access that endpoint with netcat from the same machine where that "other service" cannot access it from?
nc entry.<system>. 8182#2019-10-3118:22jherrlinHey. How can I limit the number of nested results using pull? I have 3 entities, each one with a :db.type/ref / :db.cardinality/many attribute. When pulling from Datomic I never get results because the entities have relations to each other and i assume it's trapped in an infinite loop. I am only interested in the first level of relations.#2019-10-3118:48jherrlinFound the solution to my question here: https://docs.datomic.com/cloud/query/query-pull.html#orga9eca04#2019-10-3119:54jherrlinHmm, it didn't solve my problem. Don't really grasp what it did though#2019-10-3123:06cjmurphyYou can have pull syntax that recurs only as much as you need. So you might have my-entity-pull-1 that refers to my-entity-pull-2, that refers to my-entity-pull-3. Here my-entity-pull-3 would only have non-references in it. That's how I've limited the recursion, for 'my-entity' in this case.#2019-10-3120:17bartukahi, I'm having some issues using datomic with core.async. I have the following code:
(let [in (async/chan 200)
out (async/chan 200)]
(async/pipeline 4 out (map compute-metrics) in)
(async/go (doseq [item items] (async/>! in item)))
(async/go-loop []
(println (async/<! out))
(recur)))
And the compute-metrics function basically saves an item into datomic (after performing a simple computation on one field). I am using the client.api.async to save the item. It seems to work just fine if the parallelism parameter is lower than 5 [for 120 items on my input list] but higher than that it gets stuck after computing the first 8 items.#2019-10-3120:20Alex Miller (Clojure team)can you reproduce if you use pipeline-blocking instead?#2019-10-3120:21bartukaI had the same issue using pipeline-async but haven't tried the blocking version#2019-10-3120:21bartukaI might be able to run it very quickly here, brb#2019-10-3120:23Alex Miller (Clojure team)I'm certain that the issue is that the go block threads are all blocked#2019-10-3120:23Alex Miller (Clojure team)so a thread dump would reveal what blocking op they are blocked on#2019-10-3120:24bartukayes, just worked (Y)#2019-10-3120:24Alex Miller (Clojure team)there is actually a problem with pipeline in that it uses a blocking op inside a go block, which I just fixed this week (not yet released); the fix basically makes it work like pipeline-blocking#2019-10-3120:24bartukacan you help me understand this process a little better?#2019-10-3120:25Alex Miller (Clojure team)so yeah, this is a bug in core.async that I'll release soon#2019-10-3120:27bartukaahn, ok! I was fighting with this problem the whole day rsrrs at least I learned a lot about async processes#2019-10-3120:28Alex Miller (Clojure team)I am also working on a way to detect this sort of thing in core.async (which is how I found the bug in the first place)#2019-10-3120:28bartukaIf I had used the datomic sync api, would I have succeeded?#2019-10-3120:30Alex Miller (Clojure team)no, I don't think that would have helped here.
really, if you're using the async api, you should be able to use pipeline-async I think#2019-10-3120:32bartukaI see, but when I take a connection from the channel returned by (d-async/connect) it has different properties than the sync version? I could not find much info about the distinction of these two libraries to be honest#2019-10-3120:37Alex Miller (Clojure team)sorry, I'm not much of an expert on this particular area#2019-10-3120:38bartukanp, thanks for the help.. saved the day o/#2019-11-0100:30QuestIs it possible to use wildcard matching as part of a tuple value? example query against 2-element homogenous tuple:
[:find ?e
:where [?e :nsm.entity/form [:todo _]]]
I want to match all entities with :todo as the first tuple element regardless of the value of the second element. Currently this query always returns an empty set.#2019-11-0110:14onetomsorry, forgot to mention u.
see my suggestion after your question#2019-11-0122:31QuestI can confirm the [((comp #{:todo} first) ...)] solution as working. Thanks @U086D6TBN!#2019-11-0100:43Quest^Behavior reproduces on latest version datomic-pro-0.9.5981#2019-11-0103:55onetomhow about something like
[:find ?e
:where
[?e :nsm.entity/form ?forms]
[((comp #{:todo} first) ?forms)]]
#2019-11-0104:46Joe LaneI think you want untuple#2019-11-0122:30QuestI can confirm untuple works in the following query:
'[:find ?e
:where
[?e :nsm.entity/form ?tup]
[(untuple ?tup) [?a ?b]]
[(= ?a :todo)]]
Thanks Joe!#2019-11-0108:45NemsHi everyone, after adding a private maven repo to my ions deps.edn I can't run the "clojure -A:dev -m datomic.ion.dev '{:op :push :creds-profile "dev" :region "eu-central-1"}'" command anymore. I always get the following error:
Downloading: com/datomic/java-io/0.1.11/java-io-0.1.11.pom from <s3://datomic-releases-1fc2183a/maven/releases/>
{:command-failed
"{:op :push :creds-profile \"rsdev\" :region \"eu-central-1\"}",
:causes
({:message
"Failed to read artifact descriptor for com.datomic:java-io:jar:0.1.11",
:class ArtifactDescriptorException}
{:message
"Could not transfer artifact com.datomic:java-io:pom:0.1.11 from/to roots (): status code: 401, reason phrase: Unauthorized (401)",
:class ArtifactResolutionException}
{:message
"Could not transfer artifact com.datomic:java-io:pom:0.1.11 from/to roots (): status code: 401, reason phrase: Unauthorized (401)",
:class ArtifactTransferException}
{:message "status code: 401, reason phrase: Unauthorized (401)",
:class HttpResponseException})}
If I remove the private repo and the dependency of that repo it works again.#2019-11-0112:34marshallsee https://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508/6 and https://forum.datomic.com/t/iam-permissions-to-access-s3-datomic-releases-1f2183a/861#2019-11-0112:59Alex Miller (Clojure team)By "private repo", I assume you mean one with creds in settings.xml? If so, can you successfully download deps using clj/deps.edn from it separately from the ion setup?#2019-11-0208:28NemsHi @U064X3EF3, yes a private repo with settings.xml. If we run clojure -A:dev all the dependencies get downloaded without an issue. It's only when we run clojure -A:dev datomic.ion.dev '{...}' that we run into this problem. I've tried following both links that @U05120CBV posted but they don't seem to solve the issue.#2019-11-0108:46NemsHere's my deps.edn if that helps (with private maven repo)
{:mvn/repos {"datomic-cloud" {:url ""}
"roots" {:url ""}}
:paths ["src" "resources"]
:deps {org.clojure/clojure {:mvn/version "1.10.0"}
org.clojure/data.zip {:mvn/version "0.1.3"}
org.clojure/data.xml {:mvn/version "0.2.0-alpha6"}
org.clojure/core.async {:mvn/version "0.3.442"
:exclusions [org.clojure/core.memoize]}
org.clojure/core.memoize {:mvn/version "0.7.2"}
com.datomic/ion {:mvn/version "0.9.35"}
cheshire {:mvn/version "5.8.1"}
clj-http {:mvn/version "3.10.0"}
com.cognitect.aws/api {:mvn/version "0.8.305"}
com.cognitect.aws/endpoints {:mvn/version "1.1.11.559"}
com.cognitect.aws/sqs {:mvn/version "697.2.391.0"}
com.cognitect.aws/s3 {:mvn/version "718.2.457.0"}
com.cognitect.aws/ssm {:mvn/version "718.2.451.0"}
medley {:mvn/version "1.2.0"}
camel-snake-kebab {:mvn/version "0.4.0"}
byte-transforms {:mvn/version "0.1.4"}
be.roots.mona/client {:mvn/version "1.65.5-168"}
}
:aliases {:dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.234"
:exclusions [org.slf4j/slf4j-nop]}
com.amazonaws/aws-java-sdk-sts {:mvn/version "1.11.210"}}}
:config {:extra-deps {com.cognitect.aws/sts {:mvn/version "697.2.391.0"}}
:extra-paths ["config"]}
:local {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.78"}
com.cognitect.aws/sts {:mvn/version "697.2.391.0"}
ch.qos.logback/logback-classic {:mvn/version "1.2.3"}
ch.qos.logback/logback-core {:mvn/version "1.2.3"}
org.clojure/test.check {:mvn/version "0.9.0"}
org.clojure/tools.namespace {:mvn/version "0.3.0-alpha4"}}
:extra-paths ["dev" "sessions" "test-resources"]}}}#2019-11-0109:33avfonarevWhat is the best approach when it comes to storing ordered data in Datomic? Say, someone wants to write yet another todo list app, where items can be reordered in a given list.#2019-11-0111:19octahedrion@avfonarev assert each item like {:index i :item item}#2019-11-0113:40refset@avfonarev to add to this suggestion, you may also want to consider using bisection keys rather than numbers e.g. https://github.com/Cirru/bisection-key (I've successfully used this with DataScript before, for modelling ordered lists)#2019-11-0114:34avfonarevThat is what I was leaning to. One can use amortization to reduce the number of writes per item this was.#2019-11-0113:25bartukaI am using datomic analytics through presto and I deleted the database that was connected to it. After I recreate and populate the new database, presto cannot perform any query, always returning Datomic Client Exception error#2019-11-0114:28marshallrestart your presto server#2019-11-0114:29marshallif you’re using cloud you can use the datomic-gateway script to restart the access gateway#2019-11-0114:29marshallif you’re using on-prem, just kill and restart the presto server#2019-11-0119:19bartukaI see, thanks marshal!! I will post your response into datomic dev forum so other people may benefit from this as well#2019-11-0113:25bartukathe problem is certain related, but now sure how to proceed on that#2019-11-0116:12ssdevQuick question: is there any restriction around using Lambdas & HTTP Direct at the same time?#2019-11-0116:45dmarjenburghAn apigateway method integration has either a lambda proxy or a vpc link of course, but datomic supports both at the same time. Note that the apigw lambda event data will not be present in the http direct request#2019-11-0116:37Drew VerleeAttempting to call datomic.api/connect with my database uri string results in
Execution error (NullPointerException) at datomic.kv-cluster/kv-cluster (kv_cluster.clj:355).
null
if anyone has an idea what that implies it would be a big help. i assume i have a connection issue, configuration of the connection string or a networking issue.#2019-11-0116:41Oleh K.Does datomic cloud have REST API?#2019-11-0118:32chagas.visHello everyone, I am currently studying how I can use Clojure to a bioinformatics project. One of the first problems that I find is the lack of a library to work with data frames, I did some Google search but I did not find any information about some Clojure library that has any implementation like pandas (Python) or R.#2019-11-0121:13zaneIs anyone aware of open source compilers that compile a (subset, certainly) of SQL to Datalog?#2019-11-0211:49Mark AddlemanNot directly. I'm a fan of http://teiid.org/ which allows you to SQL-fy just about anything. I believe https://prestodb.github.io/ has similar capabilities.#2019-11-0222:05Quest@zalky Found your question in the Zulip logs, reposting & answering it here because this limitation just came up for me.
I noticed that the latest versions of Datomic have new tuple types. Reading through the docs, can anyone clarify whether homogeneous tuple types have the same 2-8 element length limitations as the other two tuple types? It says homogeneous tuples have "variable" length. It's just a little ambiguous.
Answer: homogeneous tuples are subject to the same 2-8 element limitations.
Attempting to set vectors with count less than 2 or greater than 8 will produce the following exception.
java.lang.IllegalArgumentException: :db.error/invalid-tuple-value Invalid tuple value
Tested on datomic-pro-0.9.5981. The best workaround I have now is to pad tuple values with nil in order to reach the minimum length of two -- ex: :tags ["foobar" nil]#2019-11-0414:09zalkyMuch respect for the follow up!#2019-11-0318:20alidlorenzofor aws, the atomic solo 1 monthly estimate is $118 whereas the production 1 is $21, is this correct? I thought solo was more affordable/suitable for getting started - did this change?#2019-11-0318:21marshallThis is an error in marketplace#2019-11-0318:21marshallThe estimates should be reversed#2019-11-0318:21marshallWe are working with aws to correct#2019-11-0322:28Jon WalchWould you mind letting me know when this is corrected? I'd like to stand up a Solo 1 ASAP#2019-11-0323:15marshallYou can use solo as it is#2019-11-0323:15marshallThe price you are charged is correct#2019-11-0323:16marshallIts just the display on the marketplace site that is wrong#2019-11-0323:16Jon WalchThe CloudFormation template for Solo is also making reference to the instance sizing in production 1#2019-11-0402:33Jon WalchI just went through the process for setting up a Solo 1. If I look at my EC2 instances, my bastion is a t3.nano and then I have two i3.large instances? Is this correct?#2019-11-0412:15marshallThat is not correct#2019-11-0412:16marshallthat is a production topology#2019-11-0412:16marshallWhat version did you launch from Marketplace?#2019-11-0412:17marshall@UNVU1Q6G1 Can you follow the directions here https://docs.datomic.com/cloud/operation/new-system.html
and launch a stack from our Releases page instead of from Marketplace
I will contact AWS today and look into getting the listing corrected#2019-11-0412:20marshall@UNVU1Q6G1 I believe this may be an issue with the “535-8812.1” release. Can you try the one without the .1 (just 535-8812) ?#2019-11-0318:21alidlorenzoso does that mean i should solo 1 despite the large estimate, since the actual amount will be reversed?#2019-11-0318:22marshallYes#2019-11-0318:22alidlorenzook, thanks for clarifying !#2019-11-0318:32joshkhi'm curious -- what makes the following two queries different enough to return empty vs. non-empty results?
; find entities whose :user/name attribute has a value of "user123"
(d/q '{:find [?e]
:in [$ ?v]
:where [[?e :user/name ?v]]}
db "user123")
=> [[12345678912345]]
; find entities with any attribute (unbound) that has a value of "user123"
(d/q '{:find [?e]
:in [$ ?v]
:where [[?e _ ?v]]}
db "user123")
=> []
#2019-11-0318:37favilaThat is pretty alarming#2019-11-0318:37favilaMaybe :user/name is not indexed?#2019-11-0318:38favilaNonetheless, my expectation is the second query would not be empty; it might be so slow it never terminates, but not empty#2019-11-0318:39joshkhis there a way to find out? perhaps it's just me? i can definitely reproduce it.#2019-11-0318:39favilaTo me this looks like a bug#2019-11-0318:40favilait’s a pathological case, you’d never want a query like [?e _ ?v] as the first clause with ?v bound, but it should still work#2019-11-0318:40favilaexperiment with binding ?a in various ways, see if it gives results#2019-11-0318:41joshkh> Nonetheless, my expectation is the second query would not be empty; it might be so slow it never terminates, but not empty
i did wonder if this would trigger a full-db scan alert, however an empty set (returned instantly) made me scratch my head.#2019-11-0318:41favila[0 :db.install/attribute ?a] [?e ?a ?v] or filter down further#2019-11-0318:41joshkhi did try binding ?a with the same result#2019-11-0318:42favila[?a :db/ident :user/name] [?e ?a ?v]?#2019-11-0318:42favilasame result meaning empty set?#2019-11-0318:43joshkh[?a :db/ident :user/name] [?e ?a ?v] works as expected#2019-11-0318:43joshkhsimply binding ?a (and not using it) returns an empty set#2019-11-0318:43favilaI meant forcing ?a to be bound to every attribute explicitly#2019-11-0318:45favilaIt could be the query planner refuses to even try to match by ?v if it doesn’t know ?a#2019-11-0318:45favilathere’s no index it can use effectively after all#2019-11-0318:45favilaI would still want an error not a silent empty set#2019-11-0318:45joshkhinteresting, this works!
(d/q '{:find [?e ?b]
:in [$ ?v]
:where [
[?a :db/ident ?b]
[?e ?a ?v]]}
(client/db) "user123")
=> [[12345678912345]]
#2019-11-0318:48joshkh(by the way, i would never use queries like these in production. this only stemmed from some hacky experimentations.)#2019-11-0318:50joshkhhowever, my concern is that an empty set can be dangerously misleading#2019-11-0408:31dmarjenburghFYI, I reproduced this with the same results and it’s not what I expected. Adding the [_ :db/ident ?a] or [?a :db/ident] clause works (and is considerably slower).#2019-11-0408:34dmarjenburghAdding a predicate clause [(= ?v ?name)] [?e _ ?name] will warn you of a full db scan#2019-11-0319:19joshkhalso curious -- is it normal to see what look like duplicate references in :db.alter/attribute?
{:db/id 0
:db.alter/attribute [#:db{:id 99, :ident :user/name}
#:db{:id 99, :ident :user/name}]}
#2019-11-0412:32marshallThat is an issue that was resolved in the most recent release#2019-11-0321:33alidlorenzohello, i'm requiring datomic in a new boot app, and receiving a Could not locate datomic/client/impl/pro__init.class error. how can I go about resolving this?
here's my require code: (:require [datomic.client.api :as d])
here's my dependency: :dependencies '[[org.clojure/clojure "1.10.0"] [com.datomic/client-cloud "0.8.78"]]#2019-11-0321:40Jon Walchyour require and deps look fine to me#2019-11-0322:09alidlorenzofigured out it's not a dependency error it's a connection error#2019-11-0321:34Jon WalchI'm trying to read an edn file from my code base and then transact it, If I copy it verbatim and paste it in as the tx-data it works fine, however if I try to read it as a resource, slurp, and edn/read-string it, I get the following when I try to transact it
.Exceptions$IllegalArgumentExceptionInfo
:message :db.error/not-a-data-function Unable to resolve data function: #:db{:doc "User first name", :ident :user/first-name, :valueType :db.type/string, :cardinality :db.cardinality/one}
:data #:db{:error :db.error/not-a-data-function}
:at [datomic.error$arg invokeStatic error.clj 57]}]
I think it's because edn/read-string is using the Map namespace syntax (https://clojure.org/reference/reader#_maps), is there a way to force it not to?#2019-11-0322:19Jon WalchI'm just going to declare my txs in code instead of in edn#2019-11-0322:58Alex Miller (Clojure team)whether you use that syntax, or not, the map in memory is identical#2019-11-0323:00Alex Miller (Clojure team)the error makes it sound like you've got an attribute definition where datomic expects a function, which seems like something else#2019-11-0323:39alidlorenzoif we're using datomic cloud, what setup is recommended for dev and staging?
i'd rather not create two more cloud instances, so is it OK to use datomic free for these scenarios?#2019-11-0402:23Jon WalchYou can but the API is different#2019-11-0402:23Jon WalchI tried fiddling with https://github.com/ComputeSoftware/datomic-client-memdb/ but it didn't work quite right for me#2019-11-0402:44alidlorenzoso is the expected solution to create/pay for separate cloud instances?#2019-11-0402:44alidlorenzoor I guess wrap both cloud/on-premise APIs to make them the same#2019-11-0414:32faviladatomic cloud does not have a local-dev story#2019-11-0414:32favilayou are expected to have something running in the cloud, even for test runners#2019-11-0415:25kenny@UNVU1Q6G1 What did work?
@UPH6EL9DH We use datomic-client-memdb for running unit tests & gen tests on CI. We also have a dev system always running which lets you connect locally to run integration tests. The Datomic client for this dev system is created by specifying a "prefix" which will get added to all DBs created. We just implemented a simple wrapper around datomic.client.api.protocols/Client.#2019-11-0418:36Jon Walch@U083D6HK9 It was working perfectly for transacting. When I was trying to do (d/db conn) where conn is a datomic-client-memdb LocalConnection, it was telling me that the type couldn't be cast. Let me see if I can repro#2019-11-0418:40Jon Walchjava.lang.ClassCastException: class compute.datomic_client_memdb.core.LocalConnection cannot be cast to class datomic.Connection (compute.datomic_client_memdb.core.LocalConnection is in unnamed module of loader clojure.lang.DynamicClassLoader @1e420b95; datomic.Connection is in unnamed module of loader 'app'
#2019-11-0418:55kennyCan you send the full code you’re using to do that @UNVU1Q6G1 ?#2019-11-0419:27Jon WalchSpoke with @U083D6HK9 in a DM, it was user error on my part 😄#2019-11-0519:26alidlorenzo@U083D6HK9 so to be clear you're not running two cloud instances, just one instance but for dev you prefix all databases created?#2019-11-0519:29kenny@UPH6EL9DH yes — one instance with multiple devs using the same instance. Each dev makes their own prefix. #2019-11-0519:33alidlorenzo@U083D6HK9 would you be able to share some of the wrapper code? 🙂 also, even with a prefix, are you not concerned at all about mixing production and dev database?#2019-11-0519:36kennyI can see how coupled to other code our wrapper is when I get back in front of a computer. Might be able to paste some code here.
Oh, I guess we run two systems then. One for production. Dev and QA environments both use the single Datomic dev system. #2019-11-0519:39alidlorenzo@U083D6HK9 that'd be great thanks; and yea, that seems like best solution, but for a side project jumps cost for 30$ monthly to 60$ which can get pretty steep#2019-11-0519:44kennyI think you’d honestly be fine running prod and dev on the same system. Make it so prod uses no prefix. #2019-11-0519:45kennyWe run separate topologies so we get high availability. Dev is just running solo. #2019-11-0520:49kenny@UPH6EL9DH It's essentially this:
(defrecord DatomicClient [client db-prefix return-anomalies?]
  datomic-protos/Client
  (list-databases [_ arg-map]
    (let [dbs (d/list-databases client arg-map)]
      (into (list)
            (comp
              (filter (fn [db-name]
                        (if db-prefix
                          (str/starts-with? db-name (db-prefix-str db-prefix))
                          (not (str/starts-with? db-name "__")))))
              (map (fn [db-name]
                     (str/replace-first db-name (db-prefix-str db-prefix) ""))))
            dbs)))
  (connect [_ arg-map]
    (d/connect client {:db-name (prefix-db-name db-prefix (:db-name arg-map))}))
  (create-database [_ arg-map]
    (d/create-database client {:db-name (prefix-db-name db-prefix (:db-name arg-map))}))
  (delete-database [_ arg-map]
    (d/delete-database client {:db-name (prefix-db-name db-prefix (:db-name arg-map))})))
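The wrapper above calls two helpers, `db-prefix-str` and `prefix-db-name`, that weren't pasted. The bodies below are an editorial guess, consistent with how the wrapper uses them (prefix-then-dash is an assumption; kenny's actual separator may differ):

```clojure
;; Assumed implementations of the two helpers referenced but not pasted above.
(defn db-prefix-str
  "The literal string prepended to every DB name for a given dev prefix."
  [db-prefix]
  (str db-prefix "-"))

(defn prefix-db-name
  "Prepends the dev prefix (when present) to a logical DB name."
  [db-prefix db-name]
  (if db-prefix
    (str (db-prefix-str db-prefix) db-name)
    db-name))

(prefix-db-name "kenny" "orders")
;; => "kenny-orders"
(prefix-db-name nil "orders")
;; => "orders"
```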
#2019-11-0523:19alidlorenzo@U083D6HK9 great, thanks for sharing :+1:#2019-11-0323:56vnctaingI’ve some issue installing Datomic Pro Starter Edition
I created a file ~/.lein/credentials.clj
{#"my\.datomic\.com" {:username "…."
                      :password "…."}}
then generated ~/.lein/credentials.clj.gpg
gpg --default-recipient-self -e ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
added to my project.clj
:repositories {"" {:url ""
                   :creds :gpg}}
:dependencies [[com.datomic/client-pro "0.9.5927"]]
but when i run lein deps I get
Could not find artifact com.datomic:client-pro:jar:0.9.5927 in central ()
Could not find artifact com.datomic:client-pro:jar:0.9.5927 in clojars ()
Could not find artifact com.datomic:client-pro:jar:0.9.5927 in ()
This could be due to a typo in :dependencies, file system permissions, or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
#2019-11-0412:33marshallthe client-pro version is not the same as the datomic version#2019-11-0412:34marshallclient library is in Maven: https://search.maven.org/search?q=a:client-pro%26
latest version is 0.9.37#2019-11-0412:34marshallhttps://search.maven.org/artifact/com.datomic/client-pro/0.9.37/jar#2019-11-0402:25Jon WalchAnyone have this issue with starting up a new Solo 1? I didn't have this problem with Production 1.
fatal error: An error occurred (404) when calling the HeadObject operation: Key "<system-name>/datomic/access/private-keys/bastion" does not exist
Unable to read bastion key, make sure your AWS creds are correct.
I'm logged in and followed the instructions here https://docs.datomic.com/cloud/getting-started/configuring-access.html#2019-11-0412:35marshallwhere do you get this error? when trying to connect your access gateway proxy?#2019-11-0419:28Jon Walchyeah when trying to run the datomic-socks-proxy script#2019-11-0420:11marshalli would guess your AWS creds are not right in that environment#2019-11-0420:11marshalloh sorry#2019-11-0420:12marshallyeah, if the proxy wont start at all it generally is due to AWS credentials#2019-11-0420:12marshallhave you sourced them in that env? and/or set up AWS profiles?#2019-11-0601:14Jon WalchI'm pretty positive that my creds are fine because I can run other aws commands without issue#2019-11-0601:15Jon WalchI also configured the inbound rule on the security policy, and attached the relevant security policies to my user group#2019-11-0602:40Jon Walchno idea what i did differently this time besides name the system something different, but its working now#2019-11-0403:43onetomis it recommended to name card-many attributes as plural?
if not, why not?#2019-11-0403:43onetomis there a place where i can see well designed examples of datomic schemas?#2019-11-0407:46tatutIn datomic cloud, where can I see the cast/dev messages, I'm not seeing any messages in cloudwatch#2019-11-0408:41tatutoh, I see they are not logged https://forum.datomic.com/t/logging-from-an-ion/954#2019-11-0417:48BrianI'm working with Datomic Cloud and have wired up an ion to Lambda to API Gateway which is secured through Cognito to require a user token to access. Next I want to know who is using my ion. Parsing the context I find this: {:clientContext nil :identity {:identityId "" :identityPoolId ""}} (among other things). I expected this information to reflect my user or give some sore of indication as to who was using my ion. Can anyone help me understand how/why this information is not present and how I might get it?#2019-11-0502:29Msr TimHello, I followed ions tutorial described here#2019-11-0502:29Msr Timhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#2019-11-0502:29Msr Tim aws lambda invoke --function-name $(GROUP)-get-items-by-type --payload \"hat\" /dev/stdout #2019-11-0502:30Msr Timthis worked as expected but when i setup api gateway and did a curl i get the following#2019-11-0502:30Msr Timcurl https://{URL}/dev/datomic -d :hat
I3t7OmNvbG9yIDpncmVlbiwgOnR5cGUgOmhhdCwgOnNpemUgOm1lZGl1bSwgOnNrdSAiU0tVLTIzIn0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTMifQogIHs6Y29sb3IgOmdyZWVuLCA6dHlwZSA6aGF0LCA6c2l6ZSA6eGxhcmdlLCA6c2t1ICJTS1UtMzEifQogIHs6Y29sb3IgOnJlZCwgOnR5cGUgOmhhdCwgOnNpemUgOnhsYXJnZSwgOnNrdSAiU0tVLTE1In0KICB7OmNvbG9yIDpncmVlbiwgOnR5cGUgOmhhdCwgOnNpemUgOmxhcmdlLCA6c2t1ICJTS1UtMjcifQogIHs6Y29sb3IgOnllbGxvdywgOnR5cGUgOmhhdCwgOnNpemUgOmxhcmdlLCA6c2t1ICJTS1UtNTkifQogIHs6Y29sb3IgOnllbGxvdywgOnR5cGUgOmhhdCwgOnNpemUgOm1lZGl1bSwgOnNrdSAiU0tVLTU1In0KICB7OmNvbG9yIDp5ZWxsb3csIDp0eXBlIDpoYXQsIDpzaXplIDp4bGFyZ2UsIDpza3UgIlNLVS02MyJ9CiAgezpjb2xvciA6Ymx1ZSwgOnR5cGUgOmhhdCwgOnNpemUgOm1lZGl1bSwgOnNrdSAiU0tVLTM5In0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDpsYXJnZSwgOnNrdSAiU0tVLTExIn0KICB7OmNvbG9yIDpncmVlbiwgOnR5cGUgOmhhdCwgOnNpemUgOnNtYWxsLCA6c2t1ICJTS1UtMTkifQogIHs6Y29sb3IgOmJsdWUsIDp0eXBlIDpoYXQsIDpzaXplIDpsYXJnZSwgOnNrdSAiU0tVLTQzIn0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDptZWRpdW0sIDpza3UgIlNLVS03In0KICB7OmNvbG9yIDp5ZWxsb3csIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTUxIn0KICB7OmNvbG9yIDpyZWQsIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTEyMzQ1In0KICB7OmNvbG9yIDpibHVlLCA6dHlwZSA6aGF0LCA6c2l6ZSA6eGxhcmdlLCA6c2t1ICJTS1UtNDcifQogIHs6Y29sb3IgOmJsdWUsIDp0eXBlIDpoYXQsIDpzaXplIDpzbWFsbCwgOnNrdSAiU0tVLTM1In19Cg
#2019-11-0509:49onetom@meowlicious99 if i decode that response it seems legit (aside from the missing == from the end of the data):
$ (pbpaste; echo ==) | base64 -d
#{{:color :green, :type :hat, :size :medium, :sku "SKU-23"}
{:color :red, :type :hat, :size :small, :sku "SKU-3"}
{:color :green, :type :hat, :size :xlarge, :sku "SKU-31"}
{:color :red, :type :hat, :size :xlarge, :sku "SKU-15"}
{:color :green, :type :hat, :size :large, :sku "SKU-27"}
{:color :yellow, :type :hat, :size :large, :sku "SKU-59"}
{:color :yellow, :type :hat, :size :medium, :sku "SKU-55"}
{:color :yellow, :type :hat, :size :xlarge, :sku "SKU-63"}
{:color :blue, :type :hat, :size :medium, :sku "SKU-39"}
{:color :red, :type :hat, :size :large, :sku "SKU-11"}
{:color :green, :type :hat, :size :small, :sku "SKU-19"}
{:color :blue, :type :hat, :size :large, :sku "SKU-43"}
{:color :red, :type :hat, :size :medium, :sku "SKU-7"}
{:color :yellow, :type :hat, :size :small, :sku "SKU-51"}
{:color :red, :type :hat, :size :small, :sku "SKU-12345"}
{:color :blue, :type :hat, :size :xlarge, :sku "SKU-47"}
{:color :blue, :type :hat, :size :small, :sku "SKU-35"}}
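An editorial aside: the same decode, with the padding onetom appends by hand, can be done in Clojure. This is a sketch (the helper name is made up); it pads the body to a multiple of 4 before handing it to `java.util.Base64`, since strictness about missing padding varies by decoder.

```clojure
;; Clojure equivalent of the (pbpaste; echo ==) | base64 -d one-liner:
;; pad to a multiple of 4, then decode with java.util.Base64.
(defn b64-decode [s]
  (let [pad-len (mod (- 4 (mod (count s) 4)) 4)
        padded  (str s (apply str (repeat pad-len \=)))]
    (String. (.decode (java.util.Base64/getDecoder) padded) "UTF-8")))

(b64-decode "aGVsbG8")
;; => "hello"
```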
#2019-11-0513:49danierouxIn Datomic Cloud, how do we copy a database to a new database? Naively transacting the datoms from the tx-range fails because the entity ids do not match#2019-11-0514:09bartukaI am experiencing an odd behavior when I need to connect to datomic cloud. Oftren I receive an error :cognitect.anomalies/unavailable "connection refused". however, I just wait and execute the mount/start command that is managing my connection with datomic and everything works fine.#2019-11-0521:02ssdevHey folks. This is a potentially dumb question but, as I'm going through the ion tutorial, I'm noticing that the web service ion section results in an api gateway endpoint that ends with /datomic. Is /datomic always necessary at the end of the url? If not, how do I get rid of that?#2019-11-0603:28Jon WalchAnyone run into this? This is what happens when my application (running in EKS) tries to connect to my datomic cloud. Datomic cloud is working fine, tested it with the proxy. I already double checked that the EndpointAddress is correct in my cloud formation stack
{:type clojure.lang.ExceptionInfo
:message Unable to connect to .<stack_name>.
:data {:cognitect.anomalies/category :cognitect.anomalies/not-found, :cognitect.anomalies/message entry.<stack-name>.: Name does not resolve, :config {:server-type :cloud, :region us-west-2, :system <system-name>, :endpoint .<stack-name>., :endpoint-map {:headers {host entry.<stack-name>.}, :scheme http, :server-name entry.<stack-name>., :server-port 8182}}}
:at [datomic.client.impl.cloud$get_s3_auth_path invokeStatic cloud.clj 178]}]
#2019-11-0603:40Jon Walchgoing to try peering my VPCs#2019-11-0608:23cjmurphyUsing on-prem I'm looking to generate a tempid then use it to find the real-eid from the :tempids map that is returned from transact!. A generated tempid looks like {:part :db.part/user, :idx -1000305}. It would make sense to me if instead it was a negative number of 19 digits length, because the :tempids map keys are all 19 digit negatives. Can someone help with the error in my understanding? Thx.#2019-11-0609:34onetomyou can use strings as tempids even with the on-prem version of datomic.
is there any specific reason for using the d/tempid function?#2019-11-0609:48cjmurphyErr no - I somehow must have just come across it and thought that was the way to generate 'the next tempid'. I can just use gensym I guess. How do people normally generate tempid strings?#2019-11-0609:55onetomi have the feeling that u might not even need to generate if you don't care about what the tempids are.#2019-11-0609:57onetomit's not necessary to explicitly specify a tempid anymore,
UNLESS you want to reference a newly transacted (or modified) entity in another fact/entity within the same txn#2019-11-0609:58onetomalso, tempid strings only have to be uniq within a transaction, so you can simply number them with range#2019-11-0610:01onetommaybe if u can share more specifics about your use-case, then we can help easier.
im working on some tsv import code now and trying to write tests for it.
it looks something like this:
(defn mk-rule [n]
  (let [rule-expr (str "rule-" (if (keyword? n) (name n) n))]
    {:db/ident (keyword rule-expr)
     :rule/algo :regex
     :rule/expr rule-expr}))

(deftest re-import-rules
  (testing "remove rule"
    (tx [(mk-rule :to-be-deleted)
         (mk-rule :unchanged)])
    (is (= #{:rule-to-be-deleted
             :rule-unchanged}
           (q '[:find (set ?r-ident) .
                :where
                [?r :rule/algo+expr]
                [(datomic.api/ident $ ?r) ?r-ident]])))))
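An editorial aside on the string-tempid point above: the remapping a server would do (client tempid → string tempid in tx-data, then string tempid → real eid via the `:tempids` map that `d/transact` returns) is a pure function. A sketch with illustrative names and made-up eids:

```clojure
;; Illustrative sketch of resolving client-side tempids through the
;; :tempids map (string tempid -> entity id) from a transaction result.
(defn remap-tempids
  "Given client-tempid -> string-tempid and the :tempids map from the
  transaction result, build client-tempid -> real entity id."
  [client->string string->eid]
  (into {}
        (map (fn [[ctid stid]] [ctid (get string->eid stid)]))
        client->string))

(remap-tempids {:client-1 "t1", :client-2 "t2"}
               {"t1" 17592186045418, "t2" 17592186045419})
;; => {:client-1 17592186045418, :client-2 17592186045419}
```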
#2019-11-0610:04onetomnote how i just made up a convention of identifying my temporary rule entities with a rule- prefix
in another test, i just made up a bunch of rules and simply numbered, like this:
(tx (map mk-rule (range 10)))
then i could create an entity referencing them, like this:
(tx [{:txn/id 1
      :txn/matching-rule [:rule-2 :rule-3]}])
im using db/idents here, so i don't have to fuss around with tempid resolution, since im working on an in-memory db, but the naming principle is the same...#2019-11-0610:06onetomalso, if u use nested entity maps in tx-data, then the assignment of the nested entity ids to the containing entity's ref attribute is done automatically by the d/transact logic#2019-11-0610:07cjmurphyThis is a Fulcro application. The idea is that there are fulcro-tempids on client. They get sent to the server. The idea is to generate datomic tempids to go with them as pairs in a map. (key will be Fulcro, val will be datomic tempid). After transact! get two maps. Can use them to get a map of fulcro-tempid -> real-eid. The client can then use that map to do the remapping of the client state.#2019-11-0610:07onetomi've also noticed that u were talking about transact!.
that's an old function, if i understood correctly.
the current https://docs.datomic.com/on-prem/clojure/index.html documentation doesn't even mention it anymore. it just simply uses transact#2019-11-0610:08cjmurphyI'm using an old version of Datomic.#2019-11-0610:08onetomand upgrading is not an option?#2019-11-0610:09cjmurphy0.9.5703#2019-11-0610:09onetombecause writing more code which is not necessary when using newer datomic feels like unnecessary pain#2019-11-0610:10cjmurphyWell the upgrading ability ran out.#2019-11-0610:10cjmurphyOnly lasts for a year.#2019-11-0610:11onetomthat seems like a recent enough version though to support string tempids and transact without a bang#2019-11-0610:11cjmurphyYes I'll start using transact now I know.#2019-11-0610:12cjmurphyi.e. today.#2019-11-0610:12onetom(i also just noticed this change a few days ago, when coming back to datomic after 2-3 years ;)#2019-11-0610:13cjmurphySo I should just generate negative number and str them?#2019-11-0610:13cjmurphyYeah I noticed it but ignored it!#2019-11-0610:14onetomso it sounds like you dont need a fulcro-tempid -> datomic-tempid because the fulcro-tempid can be just a string and u can use that directly in your tx-data#2019-11-0610:15cjmurphyYes that's what I thought too, as long as it is a string, which I can convert it to if its not.#2019-11-0610:16cjmurphy#2019-11-0610:17cjmurphySo I might just use the random-uuid from in there.#2019-11-0610:18onetomisn't something like (->> tx-data (d/transact conn) :tempids vals) enough?#2019-11-0610:18cjmurphyThat gives me the real-eids.#2019-11-0610:18onetomwhat else is associated to the "fulcro-tempids" on the client side?#2019-11-0610:19cjmurphyWell back on the client, in client state, there are client tempids (yes "fulcro-tempids").#2019-11-0610:19onetomaren't those already some uniq strings?#2019-11-0610:19onetombecause it sounds like you can just use those directly as the datomic tempids#2019-11-0610:19cjmurphyFulcro can change them to real ids, but needs the map that can do 
that.#2019-11-0610:20cjmurphyThey are from that function above.#2019-11-0610:20cjmurphySo for each one of them (a TempId) there needs to be a val which is a real-eid.#2019-11-0610:23onetomand what is that TempId?
which namespace is it from for example?
but i guess i can't add more to this topic now.
i have to get back to work too.#2019-11-0610:24cjmurphyThe problem is already solved in my mind, doing as you say, using 'fulcro-tempid' as the tempids to datomic. String conversion not really an issue.#2019-11-0610:24cjmurphyThank you very much.#2019-11-0610:27cjmurphyhttps://github.com/fulcrologic/fulcro/blob/develop/src/main/com/fulcrologic/fulcro/algorithms/tempid.cljc#2019-11-0615:45BrianUsing Datomic Cloud I have an entity with a :tags attribute with a :db.type/keyword with :cardinality/many. The allowed keywords are :a :b :c :d :e and any combination of those keywords is allowed as the value of the :tags attribute.
Now I want to update the value of :tags with a new combination of those keywords. How can I say "remove current values of :tags and add these new values"?#2019-11-0615:46BrianI am flexible on the schema so if a structural change makes sense, I can do that#2019-11-0615:48BrianI could d/pull on the entity to pull back it's tags and then retract them one by one but that seems like the wrong way#2019-11-0615:49ghadiyou want it to be atomic -- if you read then transact you'll have a race @brian.rogers#2019-11-0615:49ghadithere are a few patterns for handling this: install a transaction function is one#2019-11-0615:52ghadianother possibility is to avoid the race is to do a CAS then retry https://docs.datomic.com/cloud/best.html#optimistic-concurrency#2019-11-0615:53ghadiyou'll need to add an attribute that you can CAS upon#2019-11-0615:53ghadi:tags/version 4
then send in [:db/cas entity :tags/version 4 5] alongside your asserts+retracts#2019-11-0615:55BrianThank you @ghadi! That gives me exactly what I needed to think about 😃#2019-11-0618:55Jon Walch@marshall Is this documentation still up to date? https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint I don't see "LoadBalancerName" in my Datomic CloudFormation Output section. I'm using a Solo topology. Look like one can't connect using this method for a solo topology. Do I have to do VPC peering for solo?#2019-11-0619:37Jon WalchAccessing Datomic from a separate VPC in versions older than 388 only can be achieved with VPC Peering Well I'm on the latest version so this seems out of the question too. What am I supposed to do?#2019-11-0619:52Jon WalchJust tried adding my EKS VPC to the private datomic route 53 hosted zone, no dice on that either#2019-11-0620:12marshallYou could run the SOCKS proxy in your EKS vpc#2019-11-0620:30Jon WalchThanks! I think I'm just going to go with Production#2019-11-0622:32Jon WalchDoes anyone know where VpcEndpointDns is? https://docs.datomic.com/cloud/operation/client-applications.html#2019-11-0711:36bartukahi, what is the appropriate way to perform setup & teardown of datomic databases when using datomic cloud? I am using mount and creating a {:db-name "my-db-test"} before testing facts with midje and when it finishes I have a function call to d/delete-database which seems perfect fine for me. However, very often I got an error in the subsequent tests saying:
#error {
repl_1 | :cause :db.error/db-deleted 463c4ecf-0733-4afd-a41a-16449265372a has been deleted
repl_1 | :data {:datomic.client-spi/context-id cc5b3341-b4c8-40e9-8cb6-2c7b1fec2f4d, :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message :db.error/db-deleted 463c4ecf-0733-4afd-a41a-16449265372a has been deleted, :dbs [{:database-id 463c4ecf-0733-4afd-a41a-16449265372a, :t 108, :next-t 109, :history false}]}
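One way to avoid tripping on a just-deleted database like this is to never reuse the name at all: give each test run an ephemeral, per-run name. A sketch (the base name is illustrative):

```clojure
;; Per-run ephemeral DB names for test setup/teardown, so a freshly
;; created test DB never reuses the name of one just deleted.
(defn ephemeral-db-name
  "Appends a random UUID to a base name, e.g. for test databases."
  [base]
  (str base "-" (java.util.UUID/randomUUID)))

(ephemeral-db-name "my-db-test")
;; => e.g. "my-db-test-0b2f6c0e-..." (unique on each call)
```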
#2019-11-0711:36bartukaI have a retry logic for that, but it seems not alright and very often the max-retries is reached.#2019-11-0716:25ghadiuse ephemeral names for your database -- don't delete and recreate a db with the same name everytime#2019-11-0717:23bartukayes, I just did that and worked out ok! Thanks!#2019-11-0716:24dmarjenburghWe are hitting the limit of the 4kb bytes per string value in datomic. What are the limitations/consequences of transacting datoms with, say an 8kb string?#2019-11-0716:26dmarjenburghBy hitting the limit, I mean our users really want to store bigger text fields. I’m trying not to have to build something that splits the string it and combines it when querying. We already treat the string as opague (it’s gzipped and base64Encoded before it goes into datomic)#2019-11-0716:33ghadi@dmarjenburgh since you already treat it as opaque, it wouldn't be a big stretch to store it elsewhere#2019-11-0716:34ghadi[:db/add e a (content-hash text)]
#2019-11-0716:34ghadithen store the text somewhere else, keyed by content-hash#2019-11-0716:52dmarjenburghI'm trying to avoid that to keep latency down and the application simpler. I'm wondering why the limit exists.#2019-11-0717:34marshallDatomic is not a BLOB store and does not support storing large opaque objects in datoms
We understand this use case and are considering options, but for now, the suggestion from @U050ECB92 to store them out of band is definitely the best approach#2019-11-0722:02henrikWe’ve reached for DynamoDB for smaller stuff, and S3 for file-sized things, and it’s worked out OK. DDB adds something like 10ms on top, worst case. Not excellent, but good enough for our use case.
Of course, you miss out on the automatic disk/in-memory caching that Datomic otherwise handles, and may end up hitting DDB quite a lot unless you explicitly handle it in some custom manner.#2019-11-0812:15dmarjenburghI understand it’s not blob store and we already use s3 for binary data with a reference in datomic. I’m also not saying there shouldn’t be a limit, I’m just trying to understand why the limit is set at 4kb and what the tradeoffs are for storing text that happens to be a bit larger.
As it stands datomic will actually happily allow larger strings (I’ve tried strings up to 16kb) and the transaction succeeds and can retrieve the values back seemingly without issue. I see 2 obvious cons:
- You can cache fewer datoms in memory
- Queries filtering against the large string value will be slower.
Maybe I’m missing something else. DynamoDB allows transactions up to 10MB so I don’t view that as the limit.
I need to weigh this against the cost of storing a string of, say, 8kb somewhere else from a business/development perspective. Introduced complexity in the application logic, losing transactionality, adding latency and costing development time. I hope you understand where I’m coming from#2019-11-0717:20calebpIf we are already subscribed to Datomic Cloud, do we need to go through the marketplace interface to create a new system? Can we just create the system directly in Cloud Formation using the appropriate templates?
{:type clojure.lang.ExceptionInfo
:message Forbidden to read keyfile at s3://<redacted>/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
:data {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message Forbidden to read keyfile at s3://<redacted>/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
#2019-11-0804:27Jon WalchDoes my application need more than S3ReadOnly to access that keyfile?#2019-11-0804:28Jon WalchUsing the VPC Endpoint
You must use the VPC Endpoint DNS (or Route53 entry if you created one) and port 8182 for the :endpoint parameter in your Datomic client configuration when connecting from your VPC:
(def cfg {:server-type :ion
          :region "<your AWS Region>" ;; e.g. us-east-1
          :system "<system-name>"
          :endpoint "http://<VpcEndpointDns>:8182"})
The endpoint DNS name can be found in the Outputs of the VPC Endpoint CloudFormation Stack under the VpcEndpointDns key.
#2019-11-0804:28Jon WalchVpcEndpointDns no longer exists#2019-11-0806:20onetomis there some concise idiom for replacing eids in :tx-data returned by d/transact or d/with, so we can see attribute idents at least?#2019-11-0806:35onetomsomething like
(->> (d/with (d/db conn) [])
     ((fn [{:keys [db-after tx-data]}]
        (map (fn [datom]
               (map #(or (d/ident db-after %) %)
                    ((juxt :e :a :v :tx :added) datom)))
             tx-data))))
=> ((13194140516868 :db/txInstant #inst"2019-11-08T06:34:31.229-00:00" 13194140516868 true))
#2019-11-0819:14pvillegas12I’m getting
http-endpoint fail failed
"Type": "java.lang.IllegalStateException",
"Message": "AsyncContext completed and/or Request lifecycle recycled",
"At": [
"org.eclipse.jetty.server.AsyncContextState",
"state",
"AsyncContextState.java",
54
]
in my datomic logs#2019-11-0819:30pvillegas12Is entity/index a reserved keyword in datomic for schema?#2019-11-0822:56BrianShould be a simple question if someone can answer it regarding transaction functions.
I'm making this call which uses a transaction function update-tags-tx:
(let [hash (ffirst hashes)
      hash-type (second (first hashes))
      tx ['(update-tags-tx hash hash-type tags)]]
  (d/transact
    conn
    {:tx-data tx}))
But getting Unable to resolve entity: hash-type which to me means that's not being evaluated due to the ' which makes sense. In the docs (https://docs.datomic.com/cloud/transactions/transaction-functions.html#calling) they use raw values. How can I do this with variables?#2019-11-0823:07ghadiQuote the symbol alone instead of the whole list#2019-11-0920:36pvillegas12Getting No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil from my ion in development#2019-11-0920:36pvillegas12Upgraded to 0.9.234 - ion-dev, has anybody else seen this? Doing a regular (cast/event {:msg "MyEvent" ::data {...}})#2019-11-0920:38pvillegas12Doing (require '[datomic.ion.cast :as cast])#2019-11-0920:51pvillegas12Using 0.9.34 ion#2019-11-1010:25erikhttps://www.dcc.fc.up.pt/~ricroc/homepage/publications/leap/2013-WFLP.pdf
> A Datalog Engine for GPUs
> Abstract. We present the design and evaluation of a Datalog engine for execution in Graphics Processing Units (GPUs). The engine evaluates recursive and non-recursive Datalog queries using a bottom-up approach based on typical relational operators. It includes a memory management scheme that automatically swaps data between memory in the host platform (a multicore) and memory in the GPU in order to reduce the number of memory transfers.
> To evaluate the performance of the engine, three Datalog queries were run on the engine and on a single CPU in the multicore host. One query runs up to 200 times faster on the (GPU) engine than on the CPU.
any likelihood this will ever be relevant to Datomic?#2019-11-1015:10cjmurphyIs it a good idea or even possible for entity attributes to have names that are integrated with spec? So something like com.some-company-name.bank-statement/line-item rather than bank-statement/line-item. Is there already documentation/discussion on this?#2019-11-1016:46ghadihttps://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates
https://docs.datomic.com/cloud/schema/schema-reference.html#entity-specs
@cjmurphy #2019-11-1016:47ghadi(Yes it is a good idea)#2019-11-1016:56cjmurphyThanks. In that documentation I see :user/name, but never :i.am.a.spec.user/name, or ::user/name. That's what was confusing me.#2019-11-1016:58cjmurphyWhat I was thinking about was not using any special feature of Datomic, just having spec kind of namespaces.#2019-11-1016:59ghadiAny of those kws are fine to register as names of specs. I would choose one name and be consistent#2019-11-1016:59ghadiYeah you can use specs to validate transaction data payloads without using those features above#2019-11-1017:46cjmurphyThanks @U050ECB92, am using the long form of namespaces now, but always with :: in the code, including in pull syntax. Only this long form can be validated by spec - that was my motivation (for others reading this).#2019-11-1017:51cjmurphyAs part of doing this I'm creating namespaces (i.e. files) that serve no purpose other than to be used in :require. Feels like going a bit off the beaten path to be doing this, hence I was looking for some confidence boosting validation 🙂#2019-11-1022:19ssdevIs it possible to export a datomic database in datomic cloud? I see how to do it with on prem version, but can't find how with cloud version#2019-11-1104:57onetomiirc a few days ago someone here said it's on of the drawbacks of the cloud version of datomic that there is no way to export it (and then import it into an on-prem datomic setup)
can't remember though whether his statement was refuted or not.#2019-11-1213:20erikbtw is it not possible to write a straightforward Clojure script to inspect the schema in the cloud DB and generate the import-export code?#2019-11-1409:45onetomno idea. im only familiar with on-prem so far#2019-11-1022:47pvillegas120.9.34 ion is broken for (cast/event ...), had to downgrade to 0.9.28 ion#2019-11-1114:05bartukahi, when should I use the async api?#2019-11-1115:07dangercodernon-blocking applications:
https://blog.codecentric.de/en/2019/04/explain-non-blocking-i-o-like-im-five/ @iagwanderson#2019-11-1118:35bartukathanks, very good reading!#2019-11-1119:13dangercoderyou're welcome 🙂#2019-11-1119:39bartukanot sure, but I am trying to write a service using rabbitmq and core.async. I am handling backpressure and parallelism nicely but when I introduce parts of the code with blocking I/O the execution freezes#2019-11-1119:40bartukaI am trying to make 30 find queries into datomic cloud simultaneously. I get a Client Timeout out of this#2019-11-1119:42bartukais this behavior expected?#2019-11-1200:31ssdevAnyone know why if I try to use d/entity I get an error No such var: d/entity?#2019-11-1200:33Alex Miller (Clojure team)there is no entity in the Client API, maybe you're using that?#2019-11-1200:35ssdevoh. yes I am. so, I would need to just use datomic.api I suppose?#2019-11-1200:35Alex Miller (Clojure team)kind of depends what you're trying to do#2019-11-1200:36Alex Miller (Clojure team)are you using on-prem or cloud?#2019-11-1200:37ssdevyeah so, clearly I'm a noob here. I'm trying to get up to speed on datomic cloud. I've managed to create a schema that creates users with first name, last name, user name, email address, and some settings. I'm trying to query for all the settings of a specific user, but what I get back looks like it's the entity id.#2019-11-1200:38ssdevSo I was trying to get that setting based on the entity id. But perhaps I'm way off in trying to do that#2019-11-1200:40ssdevI was running this query and hoping to get back the actual settings for a user -
(d/q '[:find ?settings
       :in $ ?username
       :where
       [?user :user/username ?username]
       [?user :user/settings ?settings]]
     db "myusername")
instead I get back [[79] [80]]#2019-11-1200:46Alex Miller (Clojure team)yes, the entity api is only available for peers in Datomic On-prem, so that won't be available. I would recommend looking at the Pull API instead https://docs.datomic.com/cloud/query/query-pull.html#2019-11-1200:47Alex Miller (Clojure team)that will let you pull back the data for the selected users in the shape you want#2019-11-1200:55ssdevOk thanks. I'm curious when one should use pull instead of query. Is pull just more commonly used for retrieving multiple nested values?#2019-11-1201:27Alex Miller (Clojure team)querying will primarily give you back tuples - if that's good, then use that. if you instead wanted more map-like nested data, then use pull#2019-11-1201:27Alex Miller (Clojure team)and of course you can use them together! which is kind of shown on that page#2019-11-1201:45ssdevOk cool. Thanks#2019-11-1214:34thumbnailI noticed that arguments of db fns in our peer server are Java types, where i'd have expected clojure datastructures. Is this deliberate?#2019-11-1214:35favilaExample? You mean like ArrayList instead of Vector when returning from an aggregating query?#2019-11-1214:35favilaor do you mean something else?#2019-11-1214:35thumbnailExactly that#2019-11-1214:36favilaI think it’s done for efficiency only#2019-11-1214:36favilaIt’s probably implemented with r/cat#2019-11-1214:36favila(which uses ArrayList underneath)#2019-11-1214:36thumbnailIt is also happening for the arguments that are passed into the db-fn.#2019-11-1214:40favilais this client api?#2019-11-1214:41favilaor transaction fn?#2019-11-1214:41favilawhat’s the context?#2019-11-1214:41favilaI know seqs sometimes get decoded out of fressian as arraylists#2019-11-1214:42favilabut if the query is running using peer api, your inputs are in-process. 
nothing is getting coerced#2019-11-1214:42favilaso in that case it’s likely you really are passing in what you get#2019-11-1214:50thumbnailIt’s in regards of a transaction function. So i transact a db/fn into the peer (i think?) which accepts an argument.
When i use the client api to invoke that function in a transaction, and check the type of the argument it’s the java equivalent. so a java.util.Arrays$ArrayList for example.#2019-11-1214:53favilaIt’s the “in a transaction” part#2019-11-1214:53favilayour input was serialized and sent to the transactor#2019-11-1214:53favilathe function is running on the transactor#2019-11-1214:54favilaso a side effect of the serialization/deserialization was a change in type#2019-11-1214:54thumbnailYes, i figured that was the reason. I’m just curious whether this is considered a bug or is deliberate#2019-11-1214:55thumbnailIt caused some confusion on our staging env because our dev-setup uses https://github.com/ComputeSoftware/datomic-client-memdb, which doesn’t have the same side effect#2019-11-1214:57favilaNot sure how deliberate it is, but the defaults for fressian are very lazy about preserving types exactly#2019-11-1214:57favilapretty much anything sequential will pop out as an arraylist#2019-11-1215:05thumbnailHmmm, will keep it in mind then. Is there any way to get the types so that they’ll work properly with clj or should i encode the data myself in that case?#2019-11-1215:14favilayou don’t have control over this AFAIK#2019-11-1215:14Alex Miller (Clojure team)you should be careful with only relying on what's documented in the apis and not necessarily expect any particular concrete types. Java types are used because Datomic apis can be used from other jvm langs (Java, Groovy, etc)#2019-11-1217:03BrianIs this valid data for a Datomic Cloud transaction function to return?
[[:db/add 56246616830509142 :tags :untrusted]
[:db/add 56246616830509142 :tags :verified]
[:db/retract 56246616830509142 :tags :unknown]]
#2019-11-1217:06favilaThe shape is correct, but validity depends on the schema of :tags#2019-11-1217:07Brian{:db/ident :tags
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/many
:db.attr/preds 'valid-tag?}
#2019-11-1217:08favilaok, so the types are valid; now it depends on whether the valid-tag? predicate returns true#2019-11-1217:13BrianI'm very confident that part is working properly as I've plugged things in to test it.
The problem I'm now having is:
(let [tx [(list* 'update-tags-tx hash hash-type tags)]]
  (d/transact conn {:tx-data tx}))
is returning count not supported on this type: Keyword. Any ideas?#2019-11-1217:14BrianJust to be thorough:
(def valid-tags #{:trusted :untrusted :unknown :accepted :verified :unauthorized :malicious})
(defn valid-tag? [tags]
(every? valid-tags [tags]))
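As an editorial aside on the :db.attr/preds predicate above: Datomic applies an attribute predicate to each asserted value individually, so for this cardinality-many keyword attribute the predicate receives a single keyword, not a collection. A minimal sketch of an equivalent predicate under that assumption (pure Clojure, runnable anywhere):

```clojure
;; Sketch only: :db.attr/preds calls the predicate once per asserted
;; value, so the argument is a single keyword rather than a set.
(def valid-tags
  #{:trusted :untrusted :unknown :accepted :verified :unauthorized :malicious})

(defn valid-tag?
  "True when tag is one of the allowed tag keywords."
  [tag]
  (contains? valid-tags tag))

(valid-tag? :verified) ;; => true
(valid-tag? :bogus)    ;; => false
```

The `(every? valid-tags [tags])` form in the thread behaves the same for a single keyword; `contains?` just states the per-value contract more directly.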
#2019-11-1217:24BrianOne thing I'm noticing is that the tx variable is wrapping the output of update-tags-tx in brackets however the update-tags-tx is returning a [ [...] [...] [...]] already so we're seemingly triply wrapping that which is odd to me but if I don't do that then d/transact yells at me and says it must be a list of a map#2019-11-1217:36BrianThis is my transaction function which when I test it at the repl it works totally fine but perhaps there is some count going on in here that I'm missing?
(defn update-tags-tx
  "Transaction function which when given a map of hashes and a set of tags, will find the
  entity who has those hashes and will update that entity's tags"
  [db hash hash-type new-tags]
  (let [eid (ffirst
             (d/q '[:find ?e
                    :in $ ?hash ?hash-type
                    :where
                    [?e ?hash-type ?hash]]
                  db hash hash-type))
        current-tags (set (:tags (d/pull db '[:tags] eid)))
        tags-to-add (clojure.set/difference new-tags current-tags)
        tags-to-retract (clojure.set/difference current-tags new-tags)
        tx (mapv (fn [addition] [:db/add eid :tags addition]) tags-to-add)
        retractions (mapv (fn [retract] [:db/retract eid :tags retract]) tags-to-retract)
        tx (reduce (fn [state retraction]
                     (conj state retraction))
                   tx
                   retractions)]
    tx))
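An editorial note on the (list* 'update-tags-tx hash hash-type tags) call earlier in the thread: list* splices its final argument as a sequence, which matters when the last parameter is a collection. A quick sketch (plain Clojure, no Datomic needed; the symbol and args are illustrative):

```clojure
;; list* treats its last argument as a sequence and splices it in,
;; so a tags collection becomes separate trailing arguments:
(list* 'update-tags-tx "abc123" :hash/sha256 [:trusted :verified])
;; => (update-tags-tx "abc123" :hash/sha256 :trusted :verified)

;; To pass the collection as a single argument, use list instead:
(list 'update-tags-tx "abc123" :hash/sha256 [:trusted :verified])
;; => (update-tags-tx "abc123" :hash/sha256 [:trusted :verified])
```

Since update-tags-tx takes new-tags as one argument, the spliced form changes its arity, which fits the "count not supported on this type: Keyword" symptom when the last element is a keyword.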
#2019-11-1217:38favilaIsn’t list* not what you want here? #2019-11-1217:38favilaTags is one arg, not & tags#2019-11-1217:39favilaAnyway I don’t see a count in there. Does the ex-info data on the exception give clues as to what stage is failing, tx or ensure?#2019-11-1217:44Briantags referring to the valid-tags? function? I noticed that name was wrong too tag would be what I want.
I chose (list* ...) based on https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L172 because no variation of https://docs.datomic.com/cloud/transactions/transaction-functions.html#calling worked without errors#2019-11-1217:24johnjwhat's the common naming style for attrs? some places I see :user/firstName - others :user/first-name ?#2019-11-1404:04dvingoIs there a recommended practice for the scenario where multiple developers want to iterate (deploy code multiple times a day using :uname for example) on one datomic cloud stack? We're concerned with deploys overwriting each other, resulting in in-development functions being removed by whoever deployed last because their code doesn't have the in-dev code of someone else. One strategy we're considering is to all work on one git branch and always pull and push before deploying, but it would be great if there is a strategy that doesn't involve coordination.#2019-11-1406:17tatutwe are doing development locally with http-kit instead of pushing ions. Each developer has their own database, named "$USER-dev" they can freely play around in#2019-11-1406:17tatutthat doesn't work for tx functions#2019-11-1415:08dvingoyep, we are doing similar with jetty. I'm wondering specifically about deploys. Is the only solution one query group/stack per developer?#2019-11-1506:12tatutnot an expert in that, but the solo topology is so cheap you could easily have one for each developer#2019-11-1415:20grzmCurrently attempting to upgrade to com.datomic/ion-dev "0.9.240" from "0.9.231" and seeing
Execution error (IllegalArgumentException) at datomic.ion.cast.impl/fn$G (impl.clj:14).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil
#2019-11-1415:23grzmThis is during a normal cast/event call. The only thing I've changed is updated my deps. Same issue @U6Y72LQ4A reported over the weekend. Anyone else seeing this issue?#2019-11-1415:26grzmThe immediate issue is whether I need to update ion-dev to use the analytics features released in 512-8806.#2019-11-1415:39grzmIt looks like the issue is tied to the com.datomic/ion "0.9.35" release. If I leave ion at 0.9.34, it's fine.#2019-11-1415:47grzmIn my setup: com.datomic/ion "0.9.35" + com.datomic/ion-dev "0.9.234" doesn't work. com.datomic/ion "0.9.34" + com.datomic/ion-dev "0.9.240" does.#2019-11-1418:34pvillegas12Agree on this downgrade solving my problem as well#2019-11-1418:36grzmper @U051V5LLP, looks like calling (datomic.ion.cast/initialize-redirect :stdout) (or likely another redirect option) prior to making any cast is a viable workaround until this gets fixed.#2019-11-1415:39dvingois this happening when running the code locally?#2019-11-1415:39grzmYes. ion-dev is only used locally.#2019-11-1415:47dvingoI found that this worked to get rid of those errors locally: (datomic.ion.cast/initialize-redirect :stdout) - invoke it early in your code before any calls to cast#2019-11-1415:50grzmThat's interesting. I see that works here as well. That looks like a regression. This wasn't a requirement previously.#2019-11-1417:32grzm@U05120CBV What would be your preferred method of tracking this issue? Should I open a support ticket?#2019-11-1417:34marshallyes please#2019-11-1417:45grzmDone! Thanks!#2019-11-1417:39ssdevI'm currently seeing some strange behavior. It seems like my code is not being updated with my deploy. No matter what I deploy the same response keeps getting returned. At one point I deployed a function that returned some json (ex: {showGrid: true}), and even if I go in and hard code what that function should return (now just some text that says "test") it still returns that same json as before. There haven't been any deploy failures. 
Anyone have any ideas?#2019-11-1420:43henrikDoes it pass through a caching layer? CloudFront?#2019-11-1420:58ssdevno. and in fact invoking the function directly in the terminal seems to return the old json as well.#2019-11-1419:35Ian FernandezPeople, I want to store a field with a java.time.ZonedDateTime into Datomic, some recommendations?#2019-11-1419:51favilastore in a tuple?#2019-11-1419:52favila> A ZonedDateTime holds state equivalent to three separate objects, a LocalDateTime, a ZoneId and the resolved ZoneOffset. The offset and local date-time are used to define an instant when necessary. The zone ID is used to obtain the rules for how and when the offset changes. The offset cannot be freely set, as the zone controls which offsets are valid.#2019-11-1419:52favila(from javadoc)#2019-11-1419:54favilahttps://docs.datomic.com/on-prem/schema.html#heterogeneous-tuples#2019-11-1419:55favilamaybe encode the ymd as one long, hms as another, another long for offset, and a string for zoneid#2019-11-1419:56favilaif you want to ensure temporal sort order, perhaps the first element in the tuple can be the instant (java util date, or just a long of ms since the epoc) that the zoned-date-time would convert to#2019-11-1420:02ghadithis guy datomics#2019-11-1419:35Ian Fernandezmake another field with the Zone?#2019-11-1421:34ssdevupdate on the above issue, when we change the namespace of the code we are deploying, we then see the code update, but if we push and deploy the original namespace with different code, we never see updates, we see the old code running and returning that same old json object. Anyone know why this may be?#2019-11-1422:21m0smithHow do I move an Ion from a staging to a production environment when the :app-name has to be defined in the ion-config.edn but needs to change across environments?#2019-11-1423:34steveb8nQ: is there any kind of shutdown hook available when deploying a new Ion version? 
I want to call the component/stop fn in my servers so that resources are properly cleaned up#2019-11-1503:56onetomit seems i can't rename :db/id in pull expressions with the :as option:
(d/pull-many db '[[:db/id :as :eid]
[:txn/merchant :as :XXX]]
txns)
outputs:
[{:db/id 17592186045422}
{:db/id 17592186045423}
{:XXX {:db/id 17592186045418}, :db/id 17592186045424}
{:XXX {:db/id 17592186045418}, :db/id 17592186045425}]
is that intentional?
i can't find it documented in https://docs.datomic.com/on-prem/pull.html#as-option#2019-11-1816:10matthavenerI vaguely remember someone else talking about this a few months ago and a datomic rep said it was a known limitation or something#2019-11-1512:52Luke Schubertis there a sane/performant way to accumulate a concept of a score in a datalog query?#2019-11-1512:55Luke Schubertto get the idea of what I'm going for is given two people
Name | ArbitraryField | Id
Bob | A123 | 1
Steve | B321 | 2
I want to be able to run a query for (Bob, B321) where Name gives x points and ArbitraryField gives y points on a match and both are returned#2019-11-1513:04Luke SchubertI'm also fine with it returning something like
[[1 [:name]] [2 [:arbitrary-field]]]
#2019-11-1515:30benoitDatomic works with sets, so you will have to associate the score with each result yourself with something of this shape:
(or-join [?name-q ?arbitrary-q ?id ?points]
  (and [?id :name ?name-q]
       [(ground x) ?points])
  (and [?id :arbitrary ?arbitrary-q]
       [(ground y) ?points]))
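With the or-join shape above, the query yields one [entity-id points] tuple per matching clause; totalling the score then happens outside the query. A hedged sketch over a hypothetical result set:

```clojure
;; Hypothetical query output: one [entity-id points] tuple per matched
;; clause, as the or-join approach above would produce.
(def results [[1 10] [1 5] [2 5]])

;; Sum the points per entity id outside the query:
(defn score-by-id [tuples]
  (reduce (fn [acc [id points]]
            (update acc id (fnil + 0) points))
          {}
          tuples))

(score-by-id results)
;; => {1 15, 2 5}
```

Summing in ordinary Clojure sidesteps Datalog's set semantics, which would otherwise collapse duplicate point rows.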
#2019-11-1515:48Luke Schubertah that's exactly what I'm looking for, thanks#2019-11-1515:51johnjBesides the UUID type, what other methods are there to generate unique natural numbers to store in a Long type without collisions? like the ones datomic generates (`:db/id`)#2019-11-1519:32fjolneThere’s not enough space for a universal uniqueness in 64 bits (that’s why all versions of UUIDs are 128 bit). Datomic likely uses the fact that all the new entity ids are generated sequentially (due to sequential transactor), so it could guarantee uniqueness via counters and/or clocks. #2019-11-1519:47fjolneI’d go with tx function which uses aggr max / index lookup + ensure attr has an AVET index. This should yield an O(1) / O(log n) time complexity. Query first approach would require CAS.#2019-11-1520:02fjolneAnd it would probably be more efficient to go down by negative ids, this way index lookup would require to realize only the head of the lazy index: (dec (first (index-range db :your/attr nil nil)))#2019-11-1520:33fjolneClock value (micro/nanoseconds) should also be ok in tx function due to sequentiality of txs (but not precalculated as tx data, as those are not sequential). IIRC they used it in Crux for :db/ids. 
And it's probably worth mentioning that both approaches generate ids which are easy to guess by 3rd-party (not to say UUIDs are too hard to guess but still).#2019-11-1617:02johnjinsightful, thanks, taking notes#2019-11-1515:53johnjcould implement auto-increment as tx function or doing a query first but seems inefficient#2019-11-1519:16dvingoWhy not just use UUID?#2019-11-1523:42hiredmanhttps://docs.datomic.com/on-prem/identity.html#orgdbd68d2#2019-11-1523:43hiredmanthere are squuids, which are more or less uuids, which datomic generates to be sort of sequential which is sort of like http://yellerapp.com/posts/2015-02-09-flake-ids.html#2019-11-1620:22hadilsQ: My release Clojure code (shown below) no longer works on a git commit; I have to supply a uname argument to release to the cloud. Anyone else experiencing this? Is there a fix?
Here's the code:
(defn release
  "Do push and deploy of app. Supports stable and unstable releases. Returns when deploy finishes running."
  [args]
  (try
    (let [push-data (ion-dev/push args)
          deploy-args (merge (select-keys args [:creds-profile :region :uname])
                             (select-keys push-data [:rev])
                             {:group (group)})]
      (let [deploy-data (ion-dev/deploy deploy-args)
            deploy-status-args (merge (select-keys args [:creds-profile :region])
                                      (select-keys deploy-data [:execution-arn]))]
        (loop []
          (let [status-data (ion-dev/deploy-status deploy-status-args)]
            (if (= "RUNNING" (:code-deploy-status status-data))
              (do (Thread/sleep 5000) (recur))
              status-data)))))
    (catch Exception e
      {:deploy-status "ERROR"
       :message (.getMessage e)})))
I am currently on com.datomic/client-cloud "0.8.78" and com.datomic/ion-dev "0.9.240"
Here's the error message:
(release {})
=> {:deploy-status "ERROR", :message "You must either specify a uname or deploy from clean git commit"}
#2019-11-1703:21hadilsNvm I figured out the problem.#2019-11-1809:03onetomis it not possible to use reverse navigation style in tx-data within entity maps?
im getting an invalid lookup ref error:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Invalid list form: [#:db{:id 17592186045418}]",
:db/error :db.error/invalid-lookup-ref}
when trying this:
{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
:card-many/_attribute [{:db/id 132} {:db/id 345} ...]}
or "Invalid list form: [17592186045418]" when just trying :card-many/_attribute [123 345]
it would be the inverse of a pull expression containing reverse navigation, eg:
;; pattern
[:artist/_country]
;; result
{:artist/_country [{:db/id 17592186045751} {:db/id 17592186045755} ...]}
https://docs.datomic.com/on-prem/pull.html#org31dcc1a#2019-11-1809:43mavbozo@onetom It's possible, but you have to specify the relationship one-by-one. e.g:#2019-11-1809:44mavbozo[{:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
  :card-many/_attribute 132}
 {:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
  :card-many/_attribute 345}
 {:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
  :card-many/_attribute ...}
 {:db/ident :new-entity-being-pointed-to-by-a-card-many-attr
  :card-many/_attribute ...}]#2019-11-1810:41onetomhmm... interesting. thx.
i guess it's simpler to just use the forward reference then, and the :db/add function style#2019-11-1814:11babardoDatomic cloud question: is it possible to run local-only integration tests without access to aws infra?#2019-11-1814:11babardoI tried https://github.com/ComputeSoftware/datomic-client-memdb based on the peer library for an in-memory db.
But it looks like it doesn't support tuples during schema creation.#2019-11-1814:18favilait uses datomic-free by default, which hasn’t been updated in a while (i.e. since before tuples were introduced)#2019-11-1814:18favilatry excluding that and depending on a recent datomic-pro#2019-11-1814:18babardoOk i'll try that#2019-11-1815:35babardoThanks you, it worked with a datomic pro.#2019-11-1815:35babardoBut what about licensing ? (my company already is on a datomic cloud plan)#2019-11-1815:39favilahow did you get datomic-pro without a license? starter license?#2019-11-1815:41favilaanyway, all on-prem licenses are perpetual, so you can keep using this forever. plus you are not actually running a transactor#2019-11-1815:42favilaagreed this is an odd situation#2019-11-1815:43babardook thanks for your help, we'll try to find an answer from our side 🙂#2019-11-1818:59folconI don’t remember datomic string being limited to 256 chars, is this a change? Or am I misremembering?#2019-11-1819:03favilait should be 4096 and only on cloud#2019-11-1819:03favilahttps://docs.datomic.com/cloud/schema/schema-reference.html#org5a18448#2019-11-1819:12folcon@U09R86PA4 just wondering if there was ever any plan to do edn or blob type? 
Or is string supposed to be for that usecase?#2019-11-1819:14favilathey never give roadmaps, so I donno for sure, but this table talks about “LOB” types: https://docs.datomic.com/on-prem/moving-to-cloud.html#other#2019-11-1819:15favilaLikely this means the data goes to s3 and a pointer is stored in datomic#2019-11-1819:16favilathis is a technique you should use with large objects in datomic anyway (strings or binary) even for on-prem#2019-11-1819:16favilaon-prem doesn’t have hard size limits, but it’s still a bad idea#2019-11-1819:18folconYea, that’s the problem.#2019-11-1819:18folconIt worries me a little that this hasn’t been addressed yet…#2019-11-1819:18folconThanks though =)…#2019-11-1819:23folcon@U064X3EF3 Sorry to bug you, but just wondering if there’s any way of knowing if/when LOB types are planned for?#2019-11-1819:37Alex Miller (Clojure team)I'm not on the Datomic team#2019-11-1819:38Alex Miller (Clojure team)so I don't know any more than you :)#2019-11-1819:51folconFair enough 😃..#2019-11-1818:59folconCurrently trying to setup an import operation which is a bit fiddly#2019-11-1819:01colinkahnAre there any tools to validate datalog? For instance you can’t use (and ...) as a direct descendant of :where. My use case is to validate something that programmatically generates datalog from some input.#2019-11-1819:12Alex Miller (Clojure team)https://lambdaforge.io/2019/11/08/clj-kondo-datalog-support.html is new, might help#2019-11-1819:15dvingoFor anyone who may run into this in the future....
Our team was seeing datomic cloud deploys work for one developer while failing for another developer when we had the exact same clj files, deps.edn, and ion-config.edn files (copied and pasted from the dev who successfully deployed). The deploy ended up working on a clean git clone into a new directory. We figured out that we had run a "compile" in the local directory of the dev with the failing build and the "classes" directory was being executed instead of the new clj source files. Removing the classes directory solved our problem and we can now deploy.....#2019-11-1905:12onetomhow can i access the result of a built-in transaction function like :db/retractEntity so i can modify it?
i would need to replace some of the retractions with assertions containing new computed values#2019-11-1906:05onetomanswering my question:
(->> :db/retractEntity (d/entity db) d/touch :db/code)
reveals how it works:
=> "(clojure.core/fn [db e] (datomic.builtins/build-retract-args db e))"
and indeed it works:
(datomic.builtins/build-retract-args db :x)
=> [[2 17592186045418 10 :x]]
however, since it's not documented anywhere, im bit hesitant to use it 😕#2019-11-1912:16favilaUse d/invoke instead#2019-11-1913:05favila(d/invoke db :db/retractEntity db :x)#2019-11-1913:06favilahttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/invoke#2019-11-1916:20Dustin GetzDatomic forum is down#2019-11-1916:26Alex Miller (Clojure team)seems operational to me?#2019-11-1916:36adamfeldmanI also saw it was down, back up for me too#2019-11-1916:42Dustin GetzStill down for me#2019-11-1916:44mavbozoI still can not connect to datomic forum#2019-11-1919:46dvingonot sure why this isn't on the datomic site (at least that I could see), or why there's no testing page in general, but this is super useful:
https://www.youtube.com/watch?v=JaZ1Tm6ixCY#2019-11-1920:49dvingohmm, not as useful as I thought - it looks like a db created from the datomic.api ns cannot be passed to (d/q) in datomic.client.api..#2019-11-1920:49dvingoGetting: Query args must include a database when doing so#2019-11-1920:57dvingo(shrug) will just do this then:
(defn get-user-settings*
  ([db username]
   (get-user-settings* d/q db username)) ;; <-- this is datomic.client.api
  ([q db username] ;; <-- in tests pass in datomic.api/q
   (->> (q query-all-settings db username)
        xform-user-settings)))
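The query-fn injection idea above can also be exercised with no Datomic dependency at all by passing a stub in place of d/q. A hedged sketch (the function and query are hypothetical stand-ins, not the thread's actual code):

```clojure
;; Hypothetical helper mirroring the injection pattern above:
;; production passes the real d/q; tests pass a stub returning
;; canned tuples in the shape d/q would.
(defn find-settings*
  [q db username]
  (->> (q '[:find ?settings
            :in $ ?username
            :where
            [?user :user/username ?username]
            [?user :user/settings ?settings]]
          db username)
       (map first)))

;; In a test, stub out q entirely:
(find-settings* (fn [_query _db _username] [[{:show-grid true}]])
                nil
                "myusername")
;; => ({:show-grid true})
```

This keeps the transformation logic unit-testable without standing up a client, at the cost of not exercising the query itself.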
#2019-11-1921:00csmsomeone wrote a lib for testing datomic.client.api with an in-mem DB; I think it's this: https://github.com/ComputeSoftware/datomic-client-memdb#2019-11-1921:03dvingoooh very cool. I'll take a look, thank you#2019-11-2000:20dvingohas anyone run into an issue where a "local/root" dependency will not be included in the zip file when doing a push? I'm getting a class not found error on deploy for one of the namespaces in a local root and when I unzipped the s3 asset it turned out that the source files are not being pulled into the build. I can compile and execute the code locally, so I'm at a loss for what's going on. There are also other local/root dependencies that are being included in the build.#2019-11-2001:07Alex Miller (Clojure team)local dep deps.edn changes may not force a recompute (at least in clj in general, not sure exactly about push). you might try using -Sforce#2019-11-2001:11dvingothanks! I'm not at my work computer so will let you know how it goes tomorrow#2019-11-2015:07dvingono luck 😞 even tried cloning the repo again to a new directory.#2019-11-2015:10dvingoany reason the datomic ions dev code couldn't be open sourced? This would make debugging problems like this at least tractable instead of poking random buttons of the opaque box.#2019-11-2016:49dvingoFigured out a way around this for now - to compile the app:
mkdir classes
clj -A:dev -e "(compile 'user-settings.core)"
;; add "classes" to deps.edn :paths
I have no idea why ions push is not working but the local/root deps are all in the classes and the deploy is now working. This seems like a bug.#2019-11-2020:06dmarjenburghI think only the files in the default class path are pushed, not paths in aliases. Maybe that could be it?#2019-11-2020:16dvingoThanks for the reply. Good call, unfortunately this is in the main :deps map#2019-11-2020:20dvingoAlso, the compile strategy stopped working and deploy was trying to run some very old version of the app. I'm not sure how this is happening or how I'm the first to run into this..#2019-11-2022:00dvingoOMG... it turned out to be that the local/root project did not have a :paths [] set. Add this :paths ["src"] got the dep to be included in the push....#2019-11-2022:02dvingoFigured it out because other local deps were being included just fine but they had :paths set.#2019-11-2016:34grzm@stuarthalloway As mentioned in person: Two nice-to-haves for the datomic cloud client api:
- some system generated unique value identifying a Datomic database so the application can confirm the database it's connecting to (say, a database with a particular db-name has been deleted and another created with the same name: I'd like to be able to detect that at an application level for things like automated testing)
- a way to map t value to tx eid, say I have a database, which returns a t value, I'd like to also know what tx that corresponds to.#2019-11-2103:01onetomwhen i was looking into how to get the uri of a database object i found this:
(.-db_id ^datomic.peer.Connection conn)
=> "m13n-8ba32b12-7f6e-4d64-bf95-f3e32c95d589"
im wondering if that uuid is actually such a db id u were talking about#2019-11-2020:03grzm@jaret Looking forward to seeing you at the Conj! As an aside, what's the current story with AWS integration testing with CodeDeploy/CodeBuild and cross-region S3 buckets? Is that still busticated? (Not that I consider it a Cognitect thing: I fully place that on AWS silliness.)#2019-11-2020:07jaretStill busticated in the sense you have to copy to a bucket in each region as far as I am aware.
> AWS CodeDeploy performs deployments with AWS resources located in the same region. To deploy an application to multiple regions, define the application in your target regions, copy the application bundle to an Amazon S3 bucket in each region, and then start the deployments using either a serial or parallel rollout across the regions.#2019-11-2020:08jaretI can double check with team and aws rep #2019-11-2020:09grzmThat would be AWSome. Feel free to invoke "our paying customers are (im)patiently waiting for this" on my behalf.#2019-11-2020:08grzmCheers. There are somethings that I really like about immutability: AWS immutability with respect to this issue is not one of them 😉#2019-11-2020:52grzmI've been getting some AWS emails about Nodejs 8.10 begin deprecated/removed in early 2020. I see that Datomic Cloud stuff spun up with the most recent versions of the templates includes Nodejs 8.10 runtimes. Is there going to be a release sometime soon that will include a newer runtime?#2019-11-2020:53marshallits in the pipeline, waiting for AWS to approve/ship it#2019-11-2023:31shaun-mahoodHope everyone on the Datomic team has a great time at the Conj - wish I could be there!
I found a bad link on the docs - https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ions has a link to https://docs.datomic.com/ions/ions-reference.html#lambda-ion which doesn't seem to exist.#2019-11-2110:17dmarjenburghI’m trying to do a db/cas to update a db.type/ref attribute of an entity using lookup refs:
[:db/cas [:user/id uid] :user/team [:team/id old-tid] [:team/id new-tid]]
But this fails: Compare failed: [:team/id 123] 52549571713949802#2019-11-2110:17dmarjenburghIt seems like you can’t use lookup-refs for the “old-value” in a cas?#2019-11-2111:06favilaYeah, Cas doesn’t have any smarts about types#2019-11-2111:07favilaThe lookup ref works for the new value only because db/add is resolving it, not because cas is doing anything#2019-11-2113:08souenzzohttps://portal.feedback.eu.pendo.io/app/#/case/26858
My respective "report" from 3 yrs ago.
It's still an undocumented behavior#2019-11-2120:54colinkahnDoes datomic treat var symbols with dots in them differently? Like ?bar vs ?foo.bar
{:db/ident :student/first
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
How would I go about retracting the same attribute?#2019-11-2216:23shaun-mahood@cjmurphy I don't think you can - it would cause issues with historical data in a system with data transacted to it.#2019-11-2216:28cjmurphyBut what about if that attribute was never used in any way? So all you did was that statement above, then you wanted to get back to a world where :student/first was no longer present, where you decided it was a mistake to have that attribute?#2019-11-2216:42shaun-mahoodI haven't found a way - it's only happened to me in development, so I have had to get used to forgetting that it exists until I recreate and repopulate my database (which I tend to do pretty often as I'm figuring out what attributes I want).#2019-11-2216:52johnj@cjmurphy I haven't tried, but doesn't [:db/retract 'identifier' :db/ident :student/first] work?#2019-11-2217:03cjmurphyWhat would identifier be there? Using :db/id did not work out:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/not-an-entity Unable to resolve entity: :db/id in datom [:db/id :db/ident :student/first]
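(Editor's note: a minimal sketch of the fix this thread arrives at, assuming an on-prem Peer connection `conn` and the `:student/first` attribute installed as shown above. `:db/id` is not itself an entity identifier, so first resolve the attribute's entity id, then retract its `:db/ident` from that entity.)

```clojure
;; Sketch: resolve the attribute entity's id via pull, then retract its
;; ident. Assumes `conn` is an on-prem Peer connection and :student/first
;; was installed as shown earlier in this thread.
(let [db  (d/db conn)
      eid (:db/id (d/pull db [:db/id] :student/first))]
  @(d/transact conn [[:db/retract eid :db/ident :student/first]]))
```

As fjolne points out later in the thread, the lookup step is optional, since idents are interchangeable with entity ids in transactions: `[[:db/retract :student/first :db/ident :student/first]]` also works.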
#2019-11-2219:37johnjidentifier is the entity id eid of the attribute you created#2019-11-2219:38johnjin this case it would be the natural number datomic assigned to that attribute#2019-11-2219:39johnjschema attributes are just like domain attributes#2019-11-2219:39johnjyou can query them#2019-11-2305:49cjmurphySeems to have worked thanks :) I did a pull query using [:db/id] , where the eid arg usually goes I put the attribute, so would be :student/first here. I got a low number (3 digits) eid as a result. Then I plugged that into: [:db/retract eid :db/ident attribute-name] . Sending that to transact gave back the normal success result.#2019-11-2417:48fjolne@cjmurphy This seems to be an undocumented feature, and it behaves rather weird, because (d/ident db <attr-eid>) still returns the retracted ident, while (d/touch (d/entity db <attr-eid>)) doesn’t contain the ident anymore. The installation of the attribute with this ident creates a new attribute entity though, so this kinda works, but you’re still left in a kind of inconsistent state. A more conventional (and documented) approach would be to alias the attribute entity with some new ident, and then reuse the old one: (d/transact conn [{:db/id :student/first :db/ident :student.deprecated/first}])#2019-11-2417:50fjolneOr just always design schema on the forked version of the database: https://vvvvalvalval.github.io/posts/2016-07-24-datomic-web-app-a-practical-guide.html#forking_database_connections#2019-11-2417:59fjolneAlso, there’s no need for a separate query / pull to make a retraction of the ident, as idents are interchangeable with entity ids in transactions: (d/transact conn [[:db/retract :student/first :db/ident :student/first]])#2019-11-2216:53cjmurphyThanks @shaun-mahood yes. I can certainly see myself having a production system and putting new attributes in entities then deciding they were put in the wrong place. 
If there's no actual data (so no students have been put in the system in the example above), then it makes sense to be able to remove attributes so the schema remains clean (and identical to the yuppiechef schema that in my case is what's normally used to create attributes).#2019-11-2404:09favilaI’m using datomic analytics and want a :joins through two “levels” of refs. Is this supported/supportable? example: metaschema {:tables {:foo/id {} :bar/x {}} :joins {:foo/bar-card1-ref "bar" :bar/card1-enum-ref "db__idents"}} I expect/hope-for a foo.bar_card1_ref__card1_enum_ref__ident column, but there is none.#2019-11-2423:38ackerleytngWhen I run bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d hello,datomic:, is a transactor started somewhere in the background?#2019-11-2500:24bhurlowno, the transactor must run as a separate process (assuming on-prem). When the transactor starts it stores its addressable location in the backend storage. When the peer starts it finds that stored location and can then contact the transactor directly#2019-11-2500:25bhurlowactually just seeing the mem flag here, with mem the transactor is built into the library, other protocols (`dev`, `ddb`, etc.) behave as above#2019-11-2512:55ackerleytngi see, thanks @U0FHWANJK!#2019-11-2510:43fjolneCouldn’t sign up for Datomic Forum (seems like it’s been having trouble lately), maybe somebody here knows: is it safe to open the transactor port to the public, assuming it’s connected to a password-secured SQL storage?
I understand that peers first connect to the storage, then get transactor coordinates and connect to it, but couldn’t find the authorization mechanism between peer and transactor in the docs.#2019-11-2514:26bhurlowpeers communicate with transactor via an encrypted channel so it’s OK to host transactor on public IP. In fact, this is the only config that worked for us#2019-11-2514:26bhurlowhttps://forum.datomic.com/t/unable-to-connect-to-transactor-from-to-ec2-instances/568/2#2019-11-2522:11fjolne@U0FHWANJK thanks, at some point we had the same error, but setting host to internal network IP (inside VPC) made it work. It’s good to know that the connection between peer and transactor is secure, but my concern is different: I wonder whether somebody else could actually transact / read something via the open transactor port, which is why I’m interested in the auth protocol (handshake?) between peer and transactor.#2019-11-2522:15bhurlowsomewhat sure all communication into the transactor requires the “secret” value, which is stored in backend storage#2019-11-2605:58fjolneUgh, it’s actually in the docs: https://docs.datomic.com/on-prem/aws.html
So, yes, the connection from peers to the transactor is secured via randomly generated credentials, and it’s ok to open the transactor to the public.#2019-11-2620:39bhurlowstill felt a bit exposed to me too#2019-11-2510:49fjolneWe’ve currently secured the transactor via firewall to allow only connections from the exact peer, but that’s kinda inconvenient for dev (requires ssh tunnelling) and autoscaling (requires managing all the internal network IPs of our peers).#2019-12-46frankyxhlHi,
I’m new to Datomic. Is there any advice or best practice if I’d like to connect Datomic in ClojureScript/Nodejs?
Thanks.#2019-11-2514:30bhurlowcloud or on-prem? There are no official peer libraries for cljs or node#2019-11-2516:25frankyxhlRight now using on-prem. But will use cloud in production.
Yes. I can’t find cljs library.#2019-11-2518:50grzm@jaret I know I asked you about whether or not PollingCacheUpdateFailed errors had been addressed recently, but I may have been overly distracted when you answered. (To refresh your memory: What we're seeing is part of our Datomic Cloud system stopping (a periodic CloudWatch Event that writes out to a Vertica database) while the rest of the system keeps humming along fine. I've seen PollingCacheUpdateFailed errors in the Cloudwatch logs that correlate with this.)#2019-11-2518:55jaret@grzm looks like… :
"Msg": "PollingCacheUpdateFailed",
"Cache": "CatalogCache",
"Err": {
"CognitectAnomaliesCategory": "CognitectAnomaliesFault",...#2019-11-2518:55jaret?#2019-11-2518:56jaretWhat version of Datomic Cloud are you running on this system?#2019-11-2519:10grzmYup:
"Msg": "PollingCacheUpdateFailed",
"Cache": "cache-group-poller",
"Err": {
"CognitectAnomaliesCategory": "CognitectAnomaliesFault",
"DatomicAnomaliesException": {
"Via": [
{
"Type": "com.amazonaws.SdkClientException",
"Message": "Unable to execute HTTP request: Too many open files",
"At": [
"com.amazonaws.http.AmazonHttpClient$RequestExecutor",
"handleRetryableException",
"AmazonHttpClient.java",
1175
]
},
{
"Type": "java.net.SocketException",
"Message": "Too many open files",
"At": [
"java.net.Socket",
"createImpl",
"Socket.java",
460
]
}
This was with 480-8770. We've since upgraded to 535-8812 and haven't seen it since#2019-11-2519:13grzmSeeing that in various caches: index-group-poller, tx-group-poller, cache-group-poller, query-group-poller, autoscaling-group-poller. Looks like they generally happen in pairs or three at a time, mix-and-matching which cache groups are included.#2019-11-2519:14grzmOne that happens on its own is CatalogCache , with
{
"Type": "com.amazonaws.SdkClientException",
"Message": "Unable to execute HTTP request: Connect to [] failed: Read timed out",
"At": [
"com.amazonaws.http.AmazonHttpClient$RequestExecutor",
"handleRetryableException",
"AmazonHttpClient.java",
1175
]
},
#2019-11-2519:24jaretSo one of the causes of that error (pollingCacheUpdateFailed) was addressed and other causes as long as they are transient shouldn’t represent a problem. Re: the CloudWatch Event that writes to the Vertica DB are you seeing any other errors or any other correlations? are you deploying at the same time? is the event special in any way?#2019-11-2519:25jaretI’d be happy to poke at the metrics and logs if you want to give me read-only access.#2019-11-2519:30grzmHaven't seen other errors at the same time, which is why it's kinda been stumping us. No deploys either: it happens after the system's been running for at least a couple of days running fine. Just stops writing. Let me coordinate with the client and get back to you on the log access: that'll likely have to wait until tomorrow.#2019-11-2519:33jaretAnd you have to kick over the application or datomic to get it back up again? @grzm?#2019-11-2519:34grzmWe "redeploy" (same revision) and it all starts working again. (what would it mean to restart only Datomic?)#2019-11-2520:58tylerHas there been any news on the xray daemon for datomic compute nodes?#2019-11-2520:58marshall@tyler it is included on the nodes in the latest release#2019-11-2520:58marshallbut it’s up to you to configure/use it for now#2019-11-2520:58marshallmore docs/info coming in the future#2019-11-2521:00tyler:+1: awesome, we’re happy to configure it just need that daemon running. Thanks.#2019-11-2606:45cjmurphyWhen I create entities I always give them an attribute called :base/type , which is just a keyword, for instance :bank-account . I'd like to find all the entities (preferably that I've created) that don't have this attribute. 
I've asked this on Stack Overflow: https://stackoverflow.com/questions/58866423/find-all-entities-that-are-missing-a-particular-attribute, but no answers...#2019-11-2612:16marshallhttps://docs.datomic.com/cloud/query/query-data-reference.html#missing#2019-11-2612:19benoit@U0D5RN0S1 you would have to somehow reduce the number of entities to check, otherwise you're doing a full db scan. So I would first write the clause to identify all the entities you've created and then from those, find all the entities without this attribute.#2019-11-2612:20benoitUnfortunately it is not explained in the missing? section of the docs and all examples just happened to have clauses before the missing? predicate that make the query work (`[?artist :artist/name ?name]`)#2019-11-2620:01cjmurphyThanks @U963A21SL that brings things together for me. I can work with knowing the name of an attribute in the entity that may not have a :base/type.#2019-11-2620:01cjmurphy[:find [?entities ...]
:in $ :where
[?entities ::rule/splits ?s]
[(missing? $ ?entities :base/type)]]#2019-11-2612:14leongrapenthinWhere do I find the upgrade instructions from solo to production?#2019-11-2612:18marshallhttps://docs.datomic.com/cloud/operation/upgrading.html#2019-11-2612:18marshallJust choose a production compute stack instead of a solo compute stack.
You should be running a split stack first#2019-11-2613:45joshkhAn attribute which is unique by identity allows us to use its value instead of a :db/id to identify an entity within a transaction. If no entity exists with that value then an entity is created, otherwise facts are stored about the existing entity.
Do composite tuples work the same way? They are also :db.unique/identity, however they seem to operate more like :db.unique/value in that they throw an exception when the tuple value already exists.#2019-11-2613:50joshkhOr in other words, can I take advantage of a tuple to update an existing entity? For example, change this player's colour based on the combination of their first and last name:
{:tx-data [{;; composite tuple attributes:
            :player/first-name "Jean-Luc"
            :player/last-name  "Picard"
            ;; some fact to store about the existing entity
            :player/colour     "blue"}]}
Edit: the code example throws a Unique conflict exception when transacted a second time#2019-11-2614:11marshallYou can indeed use a composite tuple for identity and upsert. You have to include the tuple itself in the transaction
So in your example, if the unique attribute is called :player/first+last, you need to include a value for :player/first+last in your next transaction to get upsert#2019-11-2614:11marshall@joshkh ^#2019-11-2614:28joshkhThanks @marshall, you just made my day.#2019-11-2614:32marshallFYI, discussed here also: https://forum.datomic.com/t/db-unique-identity-does-not-work-for-tuple-attributes/1072#2019-11-2614:35joshkhI came across that post a few months ago when I first encountered the same problem, but to be honest I didn't understand the resolution in the comments.#2019-11-2615:32joshkhi'm attempting to transact a tuple ident and am getting the following exception:
(d/transact (client/get-conn)
            {:tx-data [{:db/ident       :user/first+last
                        :db/valueType   :db.type/tuple
                        :db/tupleAttrs  [:user/first :user/last]
                        :db/cardinality :db.cardinality/one
                        :db/unique      :db.unique/identity}]})
Unable to resolve entity: :db/tupleAttrs
any ideas? I had no problem transacting it to another database an hour ago.#2019-11-2615:34favilaThis db was created with an older (pre-tuple) version of datomic?#2019-11-2615:34joshkhyes#2019-11-2615:34favilaYou need to transact these new schema attributes using administer-system#2019-11-2615:34ghadiIf so, you have to run d/administer-system as in the documentation#2019-11-2615:34ghadijinx#2019-11-2615:35favilahttps://docs.datomic.com/on-prem/deployment.html#upgrading-schema#2019-11-2615:37joshkhthis applies to cloud as well?#2019-11-2615:38joshkhsilly question, of course it does as i'm missing the attributes 😉#2019-11-2615:39joshkhgreat, that did the trick. thanks favila and ghadi!#2019-11-2616:16leongrapenthinis it technically possible/viable to downgrade the production primary compute group instance type to something cheaper as long as you don't have users?#2019-11-2616:17marshallthe supported instance types are fixed#2019-11-2616:17marshallyou can, however, reduce your ASG size to 1#2019-11-2616:17marshallif you don’t need HA#2019-11-2616:17leongrapenthini can see that#2019-11-2616:17leongrapenthinreducing asg size#2019-11-2616:17leongrapenthinthanks#2019-11-2616:18leongrapenthinstill, it's a factor-of-ten price difference at least#2019-11-2616:18marshallfrom solo - prod?#2019-11-2616:18leongrapenthinyes#2019-11-2616:19marshallfair enough; we are definitely taking feedback and considering options#2019-11-2616:19marshallalso, you can ‘turn off’ the system over nights/weekends if it’s not a user-facing prod system#2019-11-2616:19marshallsame technique - ASG to 0#2019-11-2616:19leongrapenthinas long as my customer is not live, I will have difficulty explaining this price to him#2019-11-2616:19marshallwe’re also looking into tooling that will make that somewhat easier to do/manage#2019-11-2616:20leongrapenthinsimultaneously, I need the architecture at some point, to develop against prod.
only features like http-direct#2019-11-2616:20leongrapenthinor have staging/test query group separation#2019-11-2616:21leongrapenthini need the platform running for 1-5 users testing on the customer site#2019-11-2616:21leongrapenthinturning it off is not an option#2019-11-2617:36favilaAre there plans to expose point-in-time queries via datomic analytics?#2019-11-2708:30tatutwhat's the best practice for "deleting" items and later being able to query them and restore them. In SQL you would record a deleted timestamp and use a WHERE deleted IS NULL in all queries... it seems in datomic one should just retract the entity?#2019-11-2708:30tatutbut it seems that pulling deleted entities is somewhat cumbersome, you need to get the deletion tx instant and pull from a db that is as-of one millisecond before the deletion#2019-11-2708:31tatutand I guess reinstating the entity would be to reassert all the facts?#2019-11-2709:23cjmurphyA 'deleted' marker like that seems different to a retraction, so why not use the marker? Would be more convenient. Also see two kinds of time here: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2019-11-2710:03tatutThat raises good points#2019-11-2710:03tatutBut the downside is that every query would need to filter out deleted items in the where clause#2019-11-2710:30cjmurphyTrue enough. And you might want it to be an instant in 'event time' rather than a boolean. Also you could have another entity with all the same attributes and then the 'deleted' attribute as well. More work to do at the time of deletion (shuffling it into another entity), but then you wouldn't have to filter out for everyday queries.#2019-11-2714:06ghadiEvery query has to do the same in SQL#2019-11-2805:17tatutThat is true, and I've always disliked it... you can work around it in SQL by using views.#2019-11-2712:16leongrapenthin@tatut suggestion: create a new entity type like {:retraction/before tx, :retraction/eid eid}.
When you retract your entity, invoke a transaction function that creates the "retraction" entity. Under :retraction/before, store the tx corresponding to the t of the in-transaction database. You can later use it to restore the db as-of the time of deletion. Under :retraction/eid store the entity ID of the retracted entity.#2019-11-2712:18leongrapenthinthis would be out of the way of your usual queries and gives you the ability to find it again, reference the "retraction" in other contexts, etc.#2019-11-2712:18leongrapenthinI'd only do it if I have to, because usually a history or log query does the job if I want to restore something deleted#2019-11-2801:26yannvahalewyn> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.datomic:datomic-pro:jar:0.9.5981
Not sure what to look for to debug this. I followed the steps in my.datomic (~/.m2/settings.xml and :mvn/repos are set correctly). Can someone nudge me in the right direction?#2019-11-2802:13yannvahalewynIf anyone found this by googling the error, verify you have this in settings.xml:
<settings xmlns=""
xmlns:xsi=""
xsi:schemaLocation=" ">
<servers>
<server>
<id></id>
<username>{email}</username>
<password>{password}</password>
</server>
</servers>
</settings>#2019-11-2802:02yannvahalewynThe issue was that I just copied over the settings.xml from my.datomic, but the example is not a complete example but rather just one key. It does link to the maven docs but I didn’t notice it. It’s not super intuitive what is expected for younger devs like me who have no experience with Maven.#2019-11-2802:05yannvahalewynIt took me a full 40 minutes to figure this out, now who has time for that? 😄. Any plans to streamline onboarding a bit? A better and working example would be useful imo, especially pulling in the peer library.#2019-11-2815:56yannvahalewynI noticed other devs of various levels of experience share my feelings about the onboarding. Seems like a shame to me since big improvements can be made with a couple of simple steps. This is your first introduction to otherwise amazing software and may turn potential customers away early.#2019-11-2812:26kardanI was trying to split my Cloudformation (Datomic cloud) stack to do an upgrade for the first time. When deleting the root stack I get an error for the nested compute stack, as DELETE_FAILED LambdaSecurityGroup resource <uuid> has a dependent object (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: <uuid>). Anyone have any pointers on what to do / read up on?#2019-11-2816:17jaretIn general, Datomic delete/upgrade will only delete/modify resources it created. If any of the resources it uses have been modified it will not delete that resource. Have you changed the security group or added any resources to the security group?#2019-11-2816:19marshallThis is likely the lambda ENI deletion delay issue#2019-11-2816:20marshall@U051F5T93 after you’ve waited an hour or so, try deleting again#2019-11-2816:40kardanI’ll try again. Don’t think I created much more than what’s in the guides. (but this was a while ago, so might be wrong on this). Will need to be off to handle kids and stuff for a while so will check in later to see if it succeeds.
Thanks for the pointers.#2019-11-2816:46marshallThere is a recent change in how aws handles lambda enis that affects their deletion. The current solution from aws is "wait an hour and try again"#2019-11-2904:21kardanTried twice (with a night’s sleep in between) and failed again. Could it be a problem that I created a web lambda before splitting the stack?#2019-11-2907:30kardanHitting my connected API gateway with a browser, it now responds with 500 Internal Server Error#2019-11-2907:30kardan(this is however not anything in production)#2019-11-2918:00marshallThe lambda should be deleted, unless you created something manually out of band#2019-11-2918:01marshallYou can delve into the error in the cloudformation stack and determine what specifically failed to delete#2019-11-2918:02marshallIf it is a lambda ENI, that is caused by a recent change aws made to vpc resident lambdas#2019-11-2918:02marshallYou may need to look in the vpc console or the list of security groups to determine what resources are still present#2019-11-2919:54kardanOk, will dig in deeper#2019-11-2919:54kardanThanks#2019-11-3007:21kardanDeleted the lambda security group manually and then went on to delete everything. Will start over from scratch again. Thanks for the pointers.#2019-11-2820:01bartukaI was experiencing some issues between the async/datomic parts of my project and decided to perform a small experiment. I wrote a simple query that returned 5 entity ids and used go to emit some parallel queries against my database [I'm on the datomic cloud]. This is the whole code:
(defonce client (d/client config))
(defonce connection (d/connect client {:db-name "secland"}))
(defn query-wallet []
  (-> (d/q '[:find ?e
             :where
             [?e :wallet/name _]]
           (d/db connection))
      count
      println))
(dotimes [_ 9] (async/go (query-wallet)))
If my dotimes is less than 8 it works fine and print my results. However, with 9+ parallel queries, it hangs and nothing happens. From the terminal the output of the tunnel is only:
debug1: channel 3: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45492 to 127.0.0.1 port 8182, nchannels 11
debug1: channel 10: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45506 to 127.0.0.1 port 8182, nchannels 10
debug1: channel 6: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45498 to 127.0.0.1 port 8182, nchannels 9
debug1: channel 7: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45500 to 127.0.0.1 port 8182, nchannels 8
debug1: channel 8: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45502 to 127.0.0.1 port 8182, nchannels 7
debug1: channel 2: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45410 to 127.0.0.1 port 8182, nchannels 6
debug1: channel 4: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45494 to 127.0.0.1 port 8182, nchannels 5
debug1: channel 5: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45496 to 127.0.0.1 port 8182, nchannels 4
debug1: channel 9: free: direct-tcpip: listening port 8182 for port 8182, connect from 127.0.0.1 port 45504 to 127.0.0.1 port 8182, nchannels 3
I would like to know more about this issue. Why 7 parallel processes? This query is super simple and fast. Is this a configuration issue?#2019-11-2820:15Joe LaneYou're doing blocking IO inside of a go-block. Never do this. In core async there are 8 threads in the core async threadpool, when you perform blocking io in a go block you can deadlock that threadpool.
If you are using the client api in a non-ion project, you can use the async part of the api ( https://docs.datomic.com/client-api/datomic.client.api.async.html ) to leverage core async from the datomic client.
The async part of the client api DOES NOT WORK IN AN ION.#2019-11-2820:16Joe LaneYou would have this problem with anything doing blocking io in go blocks.#2019-11-2822:16bartukaAlright!! thanks for the explanation. I will change the implementation#2019-11-2823:18bartukaI tried to implement this using the datomic async library.
(defonce client-async (d-async/client config))
(def ch-conn (d-async/connect client-async {:db-name "secland"}))
(def out-chan (async/chan 10))
(def times 9)
(dotimes [_ times]
  (async/go
    (->> (d-async/q {:query '[:find ?e
                              :where
                              [?e :wallet/name _]]
                     :args [(d-async/db (async/<! ch-conn))]})
         async/<!
         (async/>! out-chan))))
(dotimes [_ times]
  (println (count (async/<!! out-chan))))
But I still get the same error. When I change to a version using async/thread it works fine. So, probably I am still doing blocking IO right now. Can you spot the error?#2019-11-2906:05tatutin datomic cloud (prod topology) I'm getting errors in the lambda cloudwatch logs. how fatal are these?
{:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message "Connection reset by peer", :clojio/throwable :.IOException, :clojio/socket-error :receive-header, :clojio/at 1574976823963, :clojio/remote "10.213.37.146", :clojio/queue :queued-handler, :datomic.ion.lambda.handler/retries 0}#2019-11-2908:34onetomhow come the latest (`569-8835` Nov 27th) version of datomic cloud solo only supports N.Virginia region?
$ curl -s | jq .Mappings.RegionMap
{
"us-east-1": {
"Datomic": "ami-066145045fc7a4ad0",
"Bastion": "ami-09e416d6385c15902"
},
"us-east-2": {
"Datomic": "",
"Bastion": ""
},
"us-west-2": {
"Datomic": "",
"Bastion": ""
},
"eu-west-1": {
"Datomic": "",
"Bastion": ""
},
"eu-central-1": {
"Datomic": "",
"Bastion": ""
},
"ap-southeast-1": {
"Datomic": "",
"Bastion": ""
},
"ap-southeast-2": {
"Datomic": "",
"Bastion": ""
},
"ap-northeast-1": {
"Datomic": "",
"Bastion": ""
}
}#2019-11-2918:00marshallAWS marketplace issue. Use the latest version listed in the datomic docs releases oage#2019-11-2918:00marshallPage#2019-11-2917:58Oleh K.I wasn't able to create today a Solo stack on new AWS account with the latest template (of 27 Nov), it just fails. Is it a known problem?#2019-11-2917:58marshall@okilimnik use the latest version listed on the datomic docs releases page#2019-11-2917:59marshallThe newest on AWS has several problems that we are attempting to resolve#2019-11-2917:59Oleh K.@marshall thanks!#2019-11-2921:10bartukahi, I would like to understand a little better the behavior of datomic under load. For example, what is happening on the index-mem-mb metric in this situation?#2019-12-0320:54Linus EricssonDatomic seems to keep the most recent transactions in memory to batch write them efficiently. If I understand correctly, it means that the transactions only store the tx-data for each transaction (and can batch even that) which means Datomic eventually has to re-calculated its indexes. This is probably what can be seen in the last part of the graph, where the JVM obviously does a lot of work GCing, and then, when the index memory is full, recalculated the db and empties its batched up index memory.#2019-11-2921:11bartukaI noticed that for sometime, my application was processing 200msg/s from rabbit and after 15 min of operation it went down to 10msg/s I am looking at the metrics to figure out the mechanics here#2019-11-3004:56Jacob O'BryantCould org.clojure/tools.reader on ions be bumped up to, say, version 1.3.2? (It's on 1.0.0-beta4 currently). I'm trying to deploy a fulcro app, but I'm getting this error:
$ clj -Sdeps '{:deps
{org.clojure/tools.reader {:mvn/version "1.0.0-beta4"}
com.fulcrologic/fulcro {:mvn/version "3.0.10"}}}' \
-e "(require 'taoensso.timbre)"
Syntax error (IllegalAccessError) compiling at (clojure/tools/reader/edn.clj:1:1).
reader-error does not exist
It's the same error discussed at https://github.com/ptaoussanis/timbre/issues/263.
Alternatively, does anyone know how to fix this without changing the tools.reader version? I've tried messing around to no avail. Unfortunately I don't really understand what's causing the error in the first place. Interestingly, it works if I omit fulcro but still require the same versions of timbre and encore:
$ clj -Sdeps '{:deps
{org.clojure/tools.reader #:mvn{:version "1.0.0-beta4"}
com.taoensso/timbre {:mvn/version "4.10.0"}
com.taoensso/encore {:mvn/version "2.115.0"}}}' \
-e "(require 'taoensso.timbre)"#2019-11-3022:09Jacob O'Bryantupdate: I forgot about AOT-compiled jars. The problem was that fulcro depends on clojurescript, which includes tools.reader. adding :exclusions [org.clojure/clojurescript] to fulcro seems to have fixed it. It also fixed a problem with transit-clj , though for that I also had to fork fulcro and remove a call to cognitect.transit/write-meta.#2019-12-0207:21tatutAny ideas why Ion lambda is throwing exception "Key must be integer" from datomic ion code (not my app code)?#2019-12-0207:21tatut{"Msg":"IonLambdaException","Ex":{"Via":[{"Type":"java.lang.IllegalArgumentException","Message":"Key must be integer","At":["clojure.lang.APersistentVector","assoc","APersistentVector.java",347]}],"Trace":[["clojure.lang.APersistentVector","assoc","APersistentVector.java",347],["clojure.lang.APersistentVector","assoc","APersistentVector.java",18],["clojure.lang.RT","assoc","RT.java",823],["clojure.core$assoc__5401","invokeStatic","core.clj",191],["clojure.core$update","invokeStatic","core.clj",6198],["clojure.core$update","invoke","core.clj",6188],["datomic.ion.lambda.api_gateway$gateway__GT_edn","invokeStatic","api_gateway.clj",93],["datomic.ion.lambda.api_gateway$gateway__GT_edn","invoke","api_gateway.clj",87],["datomic.ion.lambda.api_gateway$edn_handler__GT_gateway_handler$fn__3198","invoke","api_gateway.clj",109],["datomic.ion.lambda.api_gateway$gateway_handler__GT_ion_handler$fn__3202","invoke","api_gateway.clj",114],["clojure.lang.Var","invoke","Var.java",384],["datomic.ion.lambda.dispatcher$fn__2154","invokeStatic","dispatcher.clj",47],["datomic.ion.lambda.dispatcher$fn__2154","invoke","dispatcher.clj",45],["clojure.lang.MultiFn","invoke","MultiFn.java",244],["datomic.ion.lambda.dispatcher$handler_fn$fn__2156","invoke","dispatcher.clj",61],["datomic.clojio$start_server$socket_loop__2356$fn__2360","invoke","clojio.clj",204],["datomic.clojio$start_server$socket_loop__2356","invoke","clojio.clj",203],["datomic.clojio$st
art_server$accept_loop__2363$fn__2364","invoke","clojio.clj",219],["clojure.core$binding_conveyor_fn$fn__5739","invoke","core.clj",2030],["clojure.lang.AFn","call","AFn.java",18],["java.util.concurrent.FutureTask","run","FutureTask.java",266],["java.util.concurrent.ThreadPoolExecutor","runWorker","ThreadPoolExecutor.java",1149],["java.util.concurrent.ThreadPoolExecutor$Worker","run","ThreadPoolExecutor.java",624],["java.lang.Thread","run","Thread.java",748]],"Cause":"Key must be integer"},"Type":"Event","Tid":1075,"Timestamp":1575271105341}#2019-12-0208:49favilaYou are calling update on a vector using a non-integer key#2019-12-0211:39tatutI know what the exception means, it is not coming from my code, but datomic's#2019-12-0211:39tatutbut it seems in this case it was due to trying to call the lambda with aws cli, instead of thru api gw#2019-12-0314:09Brian AbbottIs it possible to run Datomic Cloud on Fargate?#2019-12-0315:00ghadiNo#2019-12-0315:00ghadiYou can connect fargate clients to datomic cloud though, @briancabbott #2019-12-0315:03Brian AbbottSorry, that is what I mean. Is there somewhere that I could find some documentation on how to do that? #2019-12-0315:03Brian AbbottDoes anyone here on this channel have experience doing it?#2019-12-0315:10ghadiYep, there’s a few things to take care of: right subnet && right IAM policy on the fargate task role#2019-12-0408:02Oleh K.when I create fargate service I need to create subnets for it. But in order to connect to datomic a service must be in the datomic subnet (as the documentation says). If I create a service and put in it datomic subnets then the service fails to even start. 
Can you give some insights where to look for the problem?#2019-12-0409:02ghadiConfirm that the datomic subnets and the fargate subnets are mutually routable#2019-12-0409:02ghadiAnd confirm that the “nodes” security group is augmented to include ingress from your new subnets #2019-12-0409:08ghadiAre you doing peering @U5JRN2PU0 or same vpc?#2019-12-0409:10Oleh K.I wasn't able to connect datomic with fargate in the same VPC, so now I'm trying peering#2019-12-0409:10Oleh K.what do you mean by "Confirm that the datomic subnets and the fargate subnets are mutually routable"?#2019-12-0409:14ghadiCan they send packets to each other?#2019-12-0409:14ghadiPeering works too, but there are different steps (see the documentation)#2019-12-0409:18ghadihttps://docs.datomic.com/cloud/operation/client-applications.html#2019-12-0411:42Oleh K.I've managed to connect within the same VPC from different subnets, thank you!#2019-12-0412:52ghadinice -- what did you have to do @U5JRN2PU0?#2019-12-0412:52ghadijust for the other readers that might be watching this thread#2019-12-0412:55Oleh K.I just allowed ingress traffic in <system>*-nodes security group from my services subnets' cidrs#2019-12-0412:57ghadinice.#2019-12-0315:10ghadiOther than that the code is the same in the jvm#2019-12-0315:32bartukahi, I have some questions about datomic analytics.. what is the "active workers" that we see at the presto server gui? I have 3 instances for my query-group but I still get 1 active worker and 0 worker parallelism#2019-12-0316:26marshallThe presto server itself runs on your access gateway instance#2019-12-0316:27marshallit will use parallelism on that instance, but analytics support doesn't currently use multiple presto workers#2019-12-0316:32bartukaahn, cool. I am still trying to properly configure the query-group for my analytics needs.
i) I noticed that on the CloudWatch dashboards no action was happening on the query-group [I run the x-ray table from Metabase on a 100M-datom database], the bastion server had 100% CPU, and that single worker node was indeed the bastion server.#2019-12-0316:36marshallyou can choose a larger instance type for the access gateway instance#2019-12-0316:38bartukabut if this is the case, I don't understand the purpose of the query-group itself. I thought those instances were performing the hard work.#2019-12-0316:45marshallsome of the work, yes (the datomic DB work)
but some of the work (the SQL processing) happens on the gateway instance#2019-12-0316:36marshallI believe there are 3 or 4 choices#2019-12-0316:36bartukahowever, I found an error on the sync -q <query-group-name> config and fixed it [it was not recognizing the parameter and was taking the system name instead]. But now the S3 bucket for analytics/ has two folders, one for my system-name and another with the query-group-name. Is that right?#2019-12-0316:39markbastianI have a very minor feature request for the datomic team. When you issue a push request the response contains the deploy command (e.g. clojure -A:ion-dev '{:op :deploy, :group elided, :rev \"0876503319a40bafffc8525f0597b1355b94b587\"}'). The rev entry contains escaped quotes so I always have to paste this into a shell and then cursor over to the slashes and remove them. Any chance we can get a version in a future release that doesn't include the slashes? The status-command output does not require doing the above.#2019-12-0316:43marshall@iagwanderson seems fine - what was the thing you changed in the script?#2019-12-0316:44bartukathe problem was not in the script, I think I was calling it using sh instead of bash.#2019-12-0316:45marshallah ok#2019-12-0316:45bartukabut how do I tell the bastion to use the folder of the query-group-name? Or shouldn't I?#2019-12-0316:46bartukaI'm poking around and I deleted the system-name folder and everything stopped working 😃 Presto complains that no catalog was found. It seems it was always looking at the system-name folder#2019-12-0316:48marshall@iagwanderson you need to set the AnalyticsEndpoint in the primary compute group#2019-12-0316:48marshallwhen you go into the cloudformation for the primary compute group#2019-12-0316:48marshallthere is a parameter for#2019-12-0316:48marshall“Analytics Endpoint
Provide the name of a query group if you’d like analytic queries to go to a different endpoint. Defaults to system name.”#2019-12-0316:48marshallput your query group name in there#2019-12-0316:49marshallthat will cause the access gateway to direct its analytics queries to the query group you specify#2019-12-0316:52bartukaI cannot find this option when I click in update in the cloudformation. It should be done when I first create the primary compute group?#2019-12-0316:53marshallit is available either way
Are you running a split stack? you can’t do it with a master stack system#2019-12-0316:53bartukaI have a master stack system.. 😕#2019-12-0316:53marshallyou’ll want to split the stack#2019-12-0316:57bartukagreat! thanks for the help. I think would be nice to have these infos in the documentation Analytics Support -> Configuration. It seems I only needed to add the -q <query-group-name> . I don't know, I am using this stack for over a month now and probably missed some instructions in the docs too.#2019-12-0316:59marshallYeah, we’ll fix that#2019-12-0316:54marshallhttps://docs.datomic.com/cloud/operation/split-stacks.html#2019-12-0316:54jarethttps://forum.datomic.com/t/datomic-cloud-569-8835-and-cli-0-9-33/1277#2019-12-0317:06bartuka@marshall just to recap the question about the usefulness of the query-group with analytics-support in mind. Makes sense to say that for the analytics support, the access-gateway should be priority when we talk about machine-sizing rather than the query-group instances itself? As I understand, the presto can only make few operations before it start the SQL-processing of the data in memory. Maybe an access-gatewaymore optimized for memory-intensive tasks should be a good call?#2019-12-0321:32tylerAre http-direct requests each executed in their own thread?#2019-12-0322:04emCurious about this as well, and about the threading of ions in general#2019-12-0322:23markbastianHey all, I've got a datomic ion I'm trying to deploy and I keep getting a runtime error: "Syntax error (ClassNotFoundException) compiling new at (core.clj:79:38).
com.fasterxml.jackson.core.exc.InputCoercionException". This class was added in 2.10 (https://fasterxml.github.io/jackson-core/javadoc/2.10/com/fasterxml/jackson/core/exc/InputCoercionException.html). In my deps.edn file I specify com.fasterxml.jackson.core/jackson-core {:mvn/version "2.10.1"} in my :deps map. However, when I push with clj -A:ion-dev '{:op :push}' I get a dependency conflict warning for jackson-core listing com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.9.8"} as the version being used. This leads me to believe my specified version isn't taking. Any ideas as to how I specify/force the runtime version of a library in my datomic instance?#2019-12-0322:47markbastianI was able to change my deps.edn file version of the jackson libs to 2.9.8 and it appears to be unbreaking. I'll just have to watch for version issues when pushing. I'm still interested in knowing if there's a way to control the deployed versions of the ion so that it uses the latest jars in the cloud.#2019-12-0409:49jaihindhreddyAhh. Good 'ol jackson.#2019-12-0417:57Jacob O'BryantDatomic's dependencies can't be overridden unfortunately.
https://docs.datomic.com/cloud/ions/ions-reference.html#dependency-conflicts#2019-12-0402:35bartukayet on system planning for analytics support. I split my stack and managed to make the presto server to use a query group with 2 instances i3.xlarge which seems fine. Looking at the cloudwatch during some workload, the query group is not reaching cpu utilization above 40% which is ok by me. However, it still gets 6min to return a query like select date, sum(value) from table group by date with a table of 5MM "rows" (in pg), way too slow (?) 😕 As we can see in the screenshot the largest access-gateway available in cloud formation has only 2 processors and it is constantly on 100% usage during workload. The 4gb of memory looks like enough, but 2 cpu is not too low?#2019-12-0403:04bartukaIn fact, I see this behavior when running more than 1 query at the time. Didnt notice the x-ray launched some queries in the database. But I think the point is still valid. Would be possible to use a larger instance for access-gateway?#2019-12-0407:57Oleh K.I want to connect Datomic Cloud via VPC Peering and there is a note in the end of documentation:
If your application does not run in the provided datomic-$(SystemName)-apps security group, you must configure the datomic-$(SystemName)-entry security group to allow ingress from your application's security group.
But I don't see any *-entry security group in my environment. What security group have I to modify?#2019-12-0407:58Oleh K.it's a production topology#2019-12-0409:39Oleh K.@ghadi My question above was right about documentation)#2019-12-0413:49marshall@okilimnik That section at the bottom that you mentioned is specifically for legacy versions of Datomic Cloud, prior to 397#2019-12-0413:49Oleh K.I see, thanks#2019-12-0420:31John ContiAnyone using Datomic cloud from Heroku?#2019-12-0421:12rgorrepatiHi, Does any know if the datomic transactor is ported to jdk 11#2019-12-0512:32maxtIs com.cognitect/transit-clj still at version {:mvn/version "0.8.285"} in Ions? Any chance of getting it updated? At least to 0.8.313 which is what client-cloud depends on.#2019-12-0513:31mkvlris there a workaround for datomic.extensions/< not being variadic? Is there a better way than adding multiple clauses when trying to exclude date ranges from a result via a query?#2019-12-0513:31Luke SchubertIs there a way to run the datomic transactor on windows? I consistently get an error on startup that the input line is too long which appears to be related to classpath construction?#2019-12-0513:33Luke SchubertI have previously been using WSL which works fine for me, but I'm scripting out running local environments for our testers and they all run windows and I'd rather not have to have them all install WSL#2019-12-0513:35Joe LaneWindows historically has had an issue with java classpath lengths being too long. 
I have no idea if fixing this will allow you to run the transactor on windows, but it may not be a datomic specific issue, rather a windows+java issue.#2019-12-0513:37Luke Schubertah rats that means I'm going to probably have to go about this the harder way.#2019-12-0513:49favilaOne trick for getting around this is to write the classpath into a jar manifest then run the jar with java -jar#2019-12-0513:50favila(the “runner” jar has nothing in it but a manifest with a classpath)#2019-12-0513:57Luke Schubertis there an upper limit on the java version for a transactor?#2019-12-0513:59Luke SchubertBecause another solution as I understand could be to build a classpath file instead of the CP_LIST#2019-12-0514:22Alex Miller (Clojure team)there are some speculative generic fixes for this for clj on windows#2019-12-0514:22Alex Miller (Clojure team)not sure if that applies here#2019-12-0514:22Alex Miller (Clojure team)maybe you're not using clj so it doesn't matter#2019-12-0514:27Luke Schubertwhat I'm trying to do is run bin/transactor.cmd in a script to start a transactor#2019-12-0514:43favilaI was told by Marshall I think (although I saw nothing official) that java11 is supported#2019-12-0514:45Luke SchubertI think I'm just going to go down the path of windows users are going to have to have wsl.#2019-12-0514:45favilayour “classpath file” can be the jar with manifest.#2019-12-0514:46Luke Schubertactually yeah, you're right, I like that much better.#2019-12-0514:47Luke SchubertThanks for all the help#2019-12-0514:47favilabuild CP_LIST with spaces instead of colons, write to a file, like Class-Path: CP_LIST then jar cfm cplist.jar Manifest.txt#2019-12-0514:48favilaactually you can just distribute the jar by itself, since that classpath isn’t going to change#2019-12-0516:01dazldI’m debugging a colleague’s valcache setup - is there a function to see what the current config that datomic has loaded and understood?#2019-12-0516:02dazldseems like it’s ignoring the JVM options that 
are being passed to it.#2019-12-0516:07dazld@datomic.config/properties-ref guess this, thanks anyway#2019-12-0518:47emhttps://aws.amazon.com/blogs/compute/announcing-http-apis-for-amazon-api-gateway/
Could be interesting for Datomic Ions! Wondering about if HTTP direct could be supported too, though lambda works out of the box#2019-12-0523:51hadilsHi., I'm trying to upgrade to the latest stoarge and compute stack in Datomic Cloud (569-8835). I repeatedly get this error on the storage upgrade: Modifying service token is not allowed.. I have created and assigned a role for CloudFormationFullAccess to my user, and also attempted this with my root user. Get the same error. I have been told by Cognitect that this is an IAM problem but when I had it the last time I made these changes to IAM and fixed the problem. Can anyone give me a pointer as to what to do now? Thanks.#2019-12-0607:33maxtI had the same problem upgrading to 569. I gave up.#2019-12-0615:13jaretHi @U0773UB6D Do you recall if this was your first upgrade on a split stack? If so, we have identified this issue as a bug and are working to address in a future release. In the interim, you can get around the issue by running the upgrade with “reuse existing storage” set to false and it should succeed. Note, you will still have your existing storage and it will be used, this option just moves the CF down an alternate path.#2019-12-0615:14maxtI don't think this is the first upgrade. I'll try that workaround.#2019-12-0609:19nickikA while ago a watched a presentation by Stu about a Typed Java DSL for accessing Datomic? Does this exist anywhere? I can't find any information on it.#2019-12-0610:43dmarjenburghQuestion about keywords in Datomic. In https://docs.datomic.com/cloud/schema/schema-reference.html#orgaf99dce it says:
> Keywords are interned for efficiency.
What does this mean? I know keyword literals are interned in Clojure. Does Datomic intern keywords it encounters? Say keywords are dynamically generated (by parsing incoming JSON requests from a client for example) and are stored as keywords in Datomic. Does Datomic have an optimized way of storing/querying them?#2019-12-0617:03dmarjenburghAfter experimenting, it seems clojure also interns dynamically created keywords, so does datomic do anything special in addition?#2019-12-0618:58benoitInteresting case I encountered today when renaming an attribute. It seems that in order to rename an attribute without downtime you have to:
1. update your code to specifically pull your old attribute (`[:old/attribute]`), [*] will return the new attribute name as soon as you change the schema in step 2
2. update the schema {:db/id :old/attribute :db/ident :new/attribute}
3. update your code to use the new attribute
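Step 2 of the rename above is just an ordinary transaction; a minimal sketch using the peer API, with benoit's example idents:

```clojure
;; Renaming an attribute: assert a new :db/ident on the existing
;; attribute entity. (The old ident still resolves to the same entity,
;; which is why code pulling [*] starts seeing the new name.)
(require '[datomic.api :as d])

@(d/transact conn [{:db/id    :old/attribute
                    :db/ident :new/attribute}])
```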
Does it make sense? Am I missing something?#2019-12-0619:04favilaCorrect. I think the lesson should be “don’t use star” if you know the specific attribute you want. Star is for REPL exploration, not production code#2019-12-0619:05benoitIt seems that way. I wanted to double check here before I accept the lesson 🙂#2019-12-0915:24matthavenerdoes anyone know of an idiomatic or straightforward way of storing a datomic query in datomic? pr-str / read-string seems like the best?#2019-12-0915:29ghadithat can work, or you can store a symbol that refers to the query var in code space#2019-12-0915:29ghadithen you resolve or requiring-resolve the symbol#2019-12-0915:46Joe LaneBonus points if you include the codebase in the datomic schema for that symbol 🙂#2019-12-0921:52kennyAre there datomic specs bundled with datomic cloud by chance?#2019-12-1003:19GobinathHi Channel 👋
<https://clojurians.slack.com/archives/C03S1KBA2/p1575946578414400>
I'm pondering on pros and cons of using Datomic in production at scale.
Got the below comment regarding the same. Please do share your thoughts?#2019-12-1004:00johnj@thegobinath small data, high reads is the sweet spot#2019-12-1004:02steveb8nIMHO the current biggest con is no export tools. the recommendation seems to be that backups are not required but that just doesn’t fly in the enterprise world where I provide services. Not sure what to do about this yet#2019-12-1005:16GobinathDisadvantages
It can be slow, as Datalog is just going to be slower than equivalent SQL (assuming an equivalent SQL statement can be written).
If you are writing a LOT, you could maybe need to worry about the single transactor getting overwhelmed. This seems unlikely for most cases, but it's something to think about (you could do a sort of shard, though, and probably save yourself; but this isn't a DB for e.g. storing stock tick data).
It's a bit tricky to get up and running with, and it's expensive, and the licensing and price makes it difficult to use a hosted instance with it: you'll need to be dealing with sysadminning this yourself instead of using something like Postgres on Heroku or Mongo at MongoHQ
Source: <https://stackoverflow.com/questions/21245555/when-should-i-use-datomic>
So, how is the current situation with regards to the disadvantages of Datomic as described in this stackoverflow thread?#2019-12-1008:50henrik@U064X3EF3 Mentioned that Cloud doesn’t use a single transactor. I’m not sure of the details, but I presume that if you create more than one DB in Cloud, there’s no need to sync writes between those DBs as they are isolated from one another.
The sysadmin bit also doesn’t really apply to Cloud (though you’ll have to deal with AWS in some capacity). Getting it up and running is pretty much just clicking through a wizard.
Pricing wise, a solo deployment lands at around $30-$40/month. For production, it depends a lot on usage.#2019-12-1013:39Alex Miller (Clojure team)Saying that datalog is slow compared to sql is literally nonsense (in the literal sense of “literal”). The rest of that post reads as out of date (pre Cloud). The whole point of cloud is that the environment is largely built for you and makes best use of aws.#2019-12-1013:41Alex Miller (Clojure team)@henrik right re dbs#2019-12-1011:40marshallThe presumption that datalog is slower than sql is incorrect#2019-12-1012:21val_waeselynck@thegobinath I'd say the number 1 disadvantage of Datomic is the time you have to spend explaining why you're using it... and the fact that it's not open-source of course, which is a deal breaker for some people.
For the rest, I think it's more objective to talk in terms of limitations rather than disadvantages. Let me lay those out:#2019-12-1012:22val_waeselynck1. Datomic is not low-level storage. Don't use it for high-churn data, blobs, etc. Use if for accumulating facts, only that.#2019-12-1012:26val_waeselynck2. Datomic will be challenging if you have a high write throughput or data size (official rule of thumb: 10 billion datoms is the limit). It will be even more challenging if the relationships in the data have poor locality (this is a rare condition: a large graph with long-range relationship is an example. The usual enterprise system will be fine).#2019-12-1012:27val_waeselynck3. Most developers don't know it. I don't think it's hard to learn, especially for juniors, but your developers have to be able and willing to learn.#2019-12-1012:29val_waeselynck4. It's pretty much married to the JVM as a platform. You can call it from other platforms, but will lose many of the advantages.#2019-12-1012:29val_waeselynck5. It's not lean in terms of computational resources: the minimum deployment will have a high footprint.#2019-12-1012:32val_waeselynck6. It has essentially no support for all but 'relational' queries (fulltext etc.), and performs poorly on big aggregation queries.#2019-12-1012:34val_waeselynck7. It's not a bitemporal system, people often have misplaced expectations regarding this, because of the temporal reputation of Datomic.#2019-12-1012:36mpenet8. 
AWS only (not considering on-prem)#2019-12-1012:37val_waeselynckYes, if we're only considering Cloud I could add a few more limitations.#2019-12-1012:41val_waeselynckI still believe Datomic is the best technical option for the most mainstream use case of databases: online information systems with high reads and non-trivial transactional writes, a natural relational / graphical data model, and acting as a source of information for downstream systems.#2019-12-1012:43henrikAdd the Cloud-specific ones, for completeness.#2019-12-1012:44val_waeselynckHow so? On-Prem is an option for the others.#2019-12-1012:44henrikOh, sorry, I didn’t realise the question was about on-prem.#2019-12-1012:45val_waeselynckI don't know that the question was about one specific deployment strategy 🙂#2019-12-1012:48henrikWell, it’s quite a bit more than deployment strategy, right? The “sysadmin” bit in the post above applies more to on-prem than Cloud. And with Cloud, you’re married to CodeDeploy, for better or worse, etc.#2019-12-1012:50val_waeselynckYes I fully agree, I was only refraining from going into these specifics.#2019-12-1103:32johnjand acting as a source of information for downstream systems.
Like some kind of meta database?#2019-12-1117:11val_waeselynckNo, like the «sales» system upstream of the «emailing» and «analytics» systems#2019-12-1012:51val_waeselynck@thegobinath note: the SO post you mention predates Datomic Cloud, so some parts of it are no longer true! Especially the "It's a bit tricky to get up and running with" part as mentioned by @henrik#2019-12-1013:43GobinathOk. Apart from the challenges with Learning/Deploying, what it would be like if Twitter/Reddit had chosen Datomic (with Clojure of course)?
Reddit uses Postgres+Cassandra
Twitter uses MySQL#2019-12-1013:51Alex Miller (Clojure team)That seems like something impossible to answer#2019-12-1013:54GobinathYeah. That's a slightly stupid question :) I'm just considering a similar use case with a similar volume of data transactions#2019-12-1014:11val_waeselynckEveryone reinvents their own database system at that scale#2019-12-1014:14val_waeselynckNeither Reddit nor Twitter started with something having the capacity to deal with their current scale, and that's fine#2019-12-1014:22GobinathMakes sense. So one can be safe starting out with Datomic and come up with their own solutions to deal with scaling. Innovation is born out of necessity :)#2019-12-1014:23henrikThe hugely interconnected nature of social graphs, where users can be expected to interact ad hoc with any other user or any piece of content, seems like a problem hard to target without talking about a lot of infrastructure beyond the database.#2019-12-1014:23Alex Miller (Clojure team)you might note that Nubank started with Datomic and is now the largest fintech company in Latin America, still using Datomic#2019-12-1014:24GobinathOne great example (not to do with DBs) is what Facebook did with PHP#2019-12-1014:24GobinathRecently, how Discord used Rust to speed up Elixir#2019-12-1014:24Alex Miller (Clojure team)they have done a lot of excellent engineering to allow them to make the most out of Datomic#2019-12-1014:24henrikNubank's credit cards do seem like something that would be easier to compartmentalize than a social network. No user should interact with any other user's data.
Then you might want to make sure that they sit in the same DB I suppose.#2019-12-1014:25mpenetyes, it's heavily sharded if I recall correctly#2019-12-1014:27mpenetthey probably use datomic for other things tho. Every "db" has limitations/tradeoffs#2019-12-1014:34Mark AddlemanOne point related to Datomic Cloud's single transactor per db model: If I recall correctly, as of a year ago, you cannot use Datomic's datalog to join data across dbs but the problem was an implementation detail. I don't know if that has been resolved. If it's been fixed and your transaction boundaries don't cross dbs, then Datomic might scale very well given query groups#2019-12-1016:44grzmI'm trying to test a database function I intend to use as an entity predicate. My thought is to use it in a query: for example, identifying entities that currently violate the predicate. Something like this:
(d/q '[:find (sample 1 ?e)
:where
[?e :some/attr]
[(com.grzm/valid? $ ?e) ?valid]
[(not ?valid)]]
db)
Works in dev. Doesn't work in prod. In prod, I get the following error:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Unable to find data source: $__in__3 in: ($ $__in__2 $__in__3 $__in__4 $__in__5)
- Dev and prod have the same sha deployed.
- Both have the same version of Datomic Cloud (535-8812).
- Dev is solo, prod is, um, production.
Save me, Obi-Wan Kenobi. You're my only hope.#2019-12-1017:16grzm@marshall @U1QJACBUM anything?#2019-12-1018:22jaretIs this in an Ion?#2019-12-1018:23jaretAm I correct in understanding, the only difference is a solo topology for one system (working) and a production topology for another system (not working)?#2019-12-1018:41grzmThat's the only difference I'm aware of. I'm running that query from the repl (it's an ion in the sense that it's an allowed function), only changing my proxy connection between the two.#2019-12-1018:43marshallare you sure the ns with the valid? function is available in both?#2019-12-1018:43marshalli.e. deployed to both#2019-12-1018:47grzmYes. If I typo the name of the function, I get a is not allowed by datomic/ion-config.edn error instead.#2019-12-1019:24jaretCould you try two things?
1. use an explicit in
2. try passing in a specific entity ID to check validity#2019-12-1019:26jaretfor number 1. it would look like:
(d/q '[:find (sample 1 ?e)
:in $
:where
[?e :some/attr]
[(com.grzm/valid? $ ?e) ?valid]
[(not ?valid)]]
db)#2019-12-1019:27jaretIf all that still fails, I’d like you to log a support case with <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> so we don’t lose this to slack archive.#2019-12-1019:37grzmNumber 1 fails with the same error.#2019-12-1019:39grzmNumber 2 succeeds (passing in an eid, no sample)#2019-12-1019:41grzmSo, the question becomes how do I write the query to return entities that fail the predicate? Would be nice to be able to use sample, as I don't want to necessarily perform an exhaustive search.#2019-12-1019:42marshallAggregations in the find don't change the amount of work performed by the query#2019-12-1019:42marshallThey only shape the result#2019-12-1019:43marshallThings like sample and limit do not "short-circuit" the query#2019-12-1019:44grzm(d/q '[:find ?e
:where
[?e :some/attr]
[(com.grzm/valid? $ ?e) ?valid]
[(not ?valid)]]
db)
Returns
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
[?valid] not bound in expression clause: [(not ?valid)]
#2019-12-1019:47marshallYou may have to use an explicit not join#2019-12-1019:48marshallHm#2019-12-1019:48marshallDoes your predicate just return true or false#2019-12-1019:49marshallIf so, I think you want to put your predicate call inside of the not#2019-12-1019:49marshallNo need for the valid variable#2019-12-1019:59grzmThat's promising. Now I'm just getting timeouts and 503s. This is something I can work with. Thanks!#2019-12-1019:59grzmAny idea why sample works in solo and not in production?#2019-12-1020:00marshallnot immediately; we’ll look into it though#2019-12-1020:02grzmCheers!#2019-12-1020:02grzmWant me to open a ticket?#2019-12-1020:02marshallsure, that’d be helpful#2019-12-1116:04grzmNew wrinkle:
(d/q '[:find ?e
:in $ ?from ?until
:where
[?e :some/time ?t]
[(<= ?from ?t)]
[(< ?t ?until)]
(not-join [?e]
[(com.grzm/valid? $ ?e)])]
db from until)#2019-12-1116:07grzmWhen the range of from/until returns a small set, it completes fine. When it returns a large set (just changing range), it fails with Unable to find data source: $__in__3 in: ($ $__in__2 $__in__3 $__in__4 $__in__5)#2019-12-1116:08marshallCan you file a ticket with that info please#2019-12-1116:18grzmYup. Haven't done the one from yesterday. Same ticket or two?#2019-12-1116:19marshallSame#2019-12-1217:35grzmMore follow-up: there was some data in the production database which was causing one of the subsequent queries within the database function to fail. Given the nature of the error messages, it wasn't obvious to me where in the stack the error was happening.#2019-12-1017:01alidlorenzo@val_waeselynck you mentioned that "Datomic will be challenging if you have a high write throughput or data size." Do you think datomic could work for a note-taking style app? i wanted to take advantage of point-in-time queries, but documents will have high data sizes#2019-12-1017:13johnjDatomic doesn't do well with large strings, so much that in cloud they are restricted to 4096 chars.#2019-12-1017:16johnjAs @val_waeselynck said, datomic is not a bitemporal system, you should not rely on tx time to model time in your domain/business logic#2019-12-1017:16johnjcreate your own time attrs#2019-12-1017:20Joe LaneRemember @UPH6EL9DH, in cloud you have access to literally all of aws and their services. You could put a note at a point in time into s3 backed by cloudfront and store the reference to it in datomic. 
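Joe Lane's suggestion (note body in S3, only a reference in Datomic) could be sketched as below. The bucket name, attribute idents, and id scheme are all hypothetical, and Cognitect's aws-api is assumed for the S3 call:

```clojure
;; Sketch: store the (potentially large) note body in S3 and keep only
;; the object key in Datomic. Bucket and attribute names are made up.
(require '[cognitect.aws.client.api :as aws]
         '[datomic.client.api :as d])

(def s3 (aws/client {:api :s3}))

(defn save-note! [conn note-id body-bytes]
  (let [k (str "notes/" note-id)]
    (aws/invoke s3 {:op      :PutObject
                    :request {:Bucket "my-notes-bucket"
                              :Key    k
                              :Body   body-bytes}})
    (d/transact conn {:tx-data [{:note/id     note-id
                                 :note/s3-key k}]})))
```

Point-in-time queries against Datomic then return the key that was current as of that time, which can be dereferenced against (versioned) S3.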
You can use cloudsearch or opendistro for searching as well.#2019-12-1017:42Alex Miller (Clojure team)I don’t understand how Datomic is not a bitemporal system (if you use it that way).#2019-12-1017:43Alex Miller (Clojure team)You have both transaction times and, if desired, attributes for event times, with the ability to query taking both into account#2019-12-1017:57alidlorenzo@U0CJ19XAM thanks for the tip about saving note documents to s3 hadn't considered that#2019-12-1018:03johnjoh yeah you can, the question is if you should use datomic's history features for domain logic, in contrast to just use it for auditing/troubleshooting. https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html?t=1&cn=ZmxleGlibGVfcmVjc18y&iid=1e725951a13247d8bdc6fe8c113647d5&uid=2418042062&nid=244+293670920#2019-12-1119:54val_waeselynckhttps://clojurians.slack.com/archives/C03RZMDSH/p1575999720198400?thread_ts=1575997270.193600&cid=C03RZMDSH
Because Datomic provides no support for expressive bitemporal queries, in the same way that MySQL et al provide no support for expressive temporal queries.
Choosing to "use it that way" is not enough. Sure, you can encode bitemporal information in Datomic, but it won't be particularly practical to leverage it.#2019-12-1017:38johnjDatomic does not provide a mechanism to declare composite uniqueness constraints - does this still holds now that there is composite tuples?#2019-12-1018:26jaretDid you see this in the docs? Could you throw me a link. Because, you’re correct this is no longer true with the addition of composite tuples.#2019-12-1018:26jaretNVM just saw your link.#2019-12-1017:38Joe LaneIMO, No.#2019-12-1017:38johnjOk, that sentence is still in the docs <https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity>#2019-12-1018:27jaretWill correct. You and @U0CJ19XAM are correct. That is no longer true with the introduction of Composite Tuples.#2019-12-1018:28Lone Rangerwhoaaa there is composite uniqueness now? Am I hearing this correctly?#2019-12-1018:28Lone Rangerif-so... huzzah!#2019-12-1018:36jarethttps://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples#2019-12-1018:36Alex Miller (Clojure team)since June...#2019-12-1019:17Ike MawiraHello, I am having trouble setting up Datomic as mentioned here, https://clojurians.slack.com/archives/C053AK3F9/p1576002374401100 , I get
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
Any reason why I could be getting this error?#2019-12-1019:33Ike MawiraSeems like the issue is a Netty library, when i run
(d/create-database "datomic:")
I get a warning,
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0 (file:/home/ike/Documents/softwares/datomic-pro-0.9.5697/lib/netty-all-4.0.39.Final.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
While in IntelliJ i get this extra info
WARNING: All illegal access operations will be denied in a future release
Dec 10, 2019 10:27:44 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
#2019-12-1019:51marshall@mawiraike are you running a transactor on your local machine?#2019-12-1019:52Ike MawiraYes, i got the
System started datomic:<DB-NAME>, storing data in: data
message so i think so.#2019-12-1019:59marshall@mawiraike https://forum.datomic.com/t/important-security-update-0-9-5697/379#2019-12-1019:59marshalli would also recommend that you upgrade to a more recent version. that release is 1.5 years old#2019-12-1020:00marshallif you’ve had this storage system running before, you may be hitting that change ^ with h2#2019-12-1020:04Ike MawiraOkay, thanks @marshall, lemme update and see if it passes.#2019-12-1022:18Jon WalchWhat would the datalog look like for "give me the top ten users with the most cash"
I tried
{:query '[:find ?user-name (max 10 ?cash)
          :in $
          :where [?user :user/cash ?cash]
                 [?user :user/name ?user-name]]
 :args [db]}
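Since Datalog itself has no sort or limit, the usual pattern is to fetch the whole relation and order/truncate it in plain Clojure afterwards. A minimal sketch, reusing the :user/cash and :user/name attributes from the question; the client namespace and helper name are assumptions, not the thread's own code:

```clojure
;; Sketch: run the query without an aggregate, then sort and take
;; the top N in ordinary Clojure.
(require '[datomic.client.api :as d])

(defn top-users-by-cash [db n]
  (->> (d/q '[:find ?user-name ?cash
              :where
              [?user :user/cash ?cash]
              [?user :user/name ?user-name]]
            db)
       (sort-by second >)  ; richest first
       (take n)))

;; (top-users-by-cash db 10) ;; => up to ten [name cash] pairs
```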
#2019-12-1022:23favilaDatalog doesn’t do sorting or truncating. You would do this in two queries#2019-12-1022:24favilaor one plus a pull#2019-12-1022:18Jon Walchbut this gives me every user#2019-12-1110:02GobinathSo, this is now an open question :)
https://clojurians.slack.com/archives/C03RZMDSH/p1575947979130900
What are the most favourite DBs for Clojure ecosystem and community in general?#2019-12-1112:41Luke SchubertThe other day I was having an issue with running the transactor and console on windows due to java classpath sizes and I found a super simple solution so I wanted to drop it here in case it's useful for anyone else
as of java 6 the classpath supports wildcards, so you can remove the two for loops in ./bin/classpath.cmd and replace them with SET CP_LIST="bin;resources;lib/*;datomic-transactor*.jar"#2019-12-1117:53Adrián Rubio MorloteHi! I'm kinda noobie on datomic, does anyone know the difference between "Production" and "Production 2" Topologies??#2019-12-1117:54marshall@adrian169 that is an artifact of AWS Marketplace issues
You should use whichever contains the latest release#2019-12-1117:54Adrián Rubio MorloteAlso, I don't really know how to configure instances so they are not i3.large#2019-12-1117:54marshallin the Production topology you can choose i3.large or i3.xlarge#2019-12-1117:54marshallthe Solo topology uses a smaller instance#2019-12-1117:54Adrián Rubio MorloteOhhhhh thanks!#2019-12-1117:55Adrián Rubio MorloteHmmm isn't there a way to use smaller instances#2019-12-1117:55Adrián Rubio Morloteon production?#2019-12-1117:55marshalli3.large is the smallest supported instance type in production topology#2019-12-1117:55marshallsee: https://docs.datomic.com/cloud/whatis/architecture.html#topologies#2019-12-1117:55marshallfor some additional information#2019-12-1117:56marshallalso useful: https://docs.datomic.com/cloud/operation/planning.html#2019-12-1117:56Adrián Rubio MorloteThank you so much!#2019-12-1118:36johnjIs there a way to directly omit the :db/ident key in a pull expression for an "enum"? having only its value returned#2019-12-1118:42johnj{:db/ident :green} => :green#2019-12-1119:56favilaNo, it is not possible. You have to postprocess with e.g. clojure.walk#2019-12-1120:34johnjjust used update for this simple case, definitely going to need clojure.walk on the next one, thanks#2019-12-1119:02John MillerI’m having trouble with upserts on entities with tuple identities containing a ref. The only way I can get upserts to work is to look up the entity id of the ref and use that in the query. Tempids work for the initial insert but then fail with an identity conflict. Lookup refs don’t work at all. And tuple does not appear to work in transact. Here’s a repro script:
(d/transact dt-conn {:tx-data [{:db/ident       :example/r
                                :db/valueType   :db.type/keyword
                                :db/cardinality :db.cardinality/one
                                :db/unique      :db.unique/identity}
                               {:db/ident       :example/id
                                :db/valueType   :db.type/string
                                :db/cardinality :db.cardinality/one}
                               {:db/ident       :example/ref
                                :db/valueType   :db.type/ref
                                :db/cardinality :db.cardinality/one}
                               {:db/ident       :example/multi
                                :db/valueType   :db.type/tuple
                                :db/tupleAttrs  [:example/ref :example/id]
                                :db/cardinality :db.cardinality/one
                                :db/unique      :db.unique/identity}]})
(d/transact dt-conn {:tx-data [[:db/add "one" :example/r :one]]})
(d/transact dt-conn {:tx-data [{:example/ref [:example/r :one]
                                :example/id  "foo"}]}) ; Succeeds once - Fine, need to include the identity tuple
(d/transact dt-conn {:tx-data [{:example/ref   [:example/r :one]
                                :example/id    "bar"
                                :example/multi [[:example/r :one] "bar"]}]}) ; Fails - "Invalid tuple value"
(d/transact dt-conn {:tx-data [[:db/add "ONE" :example/r :one]
                               {:example/ref   "ONE"
                                :example/id    "baz"
                                :example/multi ["ONE" "baz"]}]}) ; Succeeds once. Then fails - "Unique conflict: :example/multi, value [...] already held by ..."
(d/q '[:find ?e :where [?e :example/r :one]] (d/db dt-conn)) ; Put the resulting id in the next query
(d/transact dt-conn {:tx-data [{:example/ref   [:example/r :one]
                                :example/id    "qux"
                                :example/multi [<insert value here> "qux"]}]}) ; Succeeds upsert
Any suggestion on how to make this work?#2019-12-1121:25dominicmI'm getting " handshake timed out" when connecting to a datomic free transactor using the datomic clojure client api:
Dec 11, 2019 9:18:41 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
I vaguely recall there being some changes to all this, but I don't remember the detail.#2019-12-1121:38dominicmI set encrypt-channel to false#2019-12-1121:50steveb8nQ: I’m running Ions using the “connect on use” pattern when my app startup calls a component/start of my stack. My stack becomes unstable after a few CI deploys and I suspect that the lack of a component/stop call before the new stack is started is leaking resources such as aws clients. What is the recommended way to shutdown stacks during deploys? Is there a hook in one of the step functions used in deploy to address this?#2019-12-1217:28grzmSeeing memory allocation errors on the BeforeInstall step during deploy to a solo Datomic cloud instance.
[stderr]OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000faa00000, 72351744, 0) failed; error='Cannot allocate memory' (errno=12)
#2019-12-1217:29grzmIf I recall, in the past when this happens, just deploying again fixes it. Doesn't seem to be the case this time. No new deps or big code changes. Version 535-8812 (solo)#2019-12-1217:30grzmThe rollback after the failed deploy is also failing.#2019-12-1218:58grzmEnded up turning down the Autoscaling from 1 instance to 0, and back to 1 to bounce the solo instance. Deploy then started working again.#2019-12-1220:47steveb8nI have the same problem. Roughly every 3rd CI build. To fix I just kill the EC2 instance and the ALB brings up a new one automatically#2019-12-1220:48steveb8nThe deploy works again. It has to be memory#2019-12-1220:50grzmThat sounds painful. Terminating the instance is probably a faster way than modifying auto-scaling each time (twice!). Thanks for the tip!#2019-12-1223:14steveb8nglad to help. just make sure you don’t terminate the bastion 🙂#2019-12-1222:13steveb8nQ: I’m about (tomorrow) to create a production cloud topology. I’d like to verify my thinking before I do this. Is there someone at Cognitect I can talk to about this?#2019-12-1301:02Brian AbbottRandom… do we have any idea of how many datomic-deployment instances exist in the world? I am contemplating a book proposal to one of the major tech publishers… Justifying potential market size would be helpful. Also, is there anything anyone here would like covered beyond the beaten path for a DB book?#2019-12-1303:00souenzzoAt NuBank, a single bank from Brasil, there are more than 1000 transactor instances#2019-12-1315:26johnjouch#2019-12-1315:34onetomwe should link their latest video, where they talked about this, to give some credibility to this statement.
@U2J4FRT2T are you working for nubank?#2019-12-1316:10souenzzoI will dig a video presentation with this info.
Not working there, but we use the same stack and I talk a lot with nubankers#2019-12-1302:50onetomHas anyone run into not being able to start the socks proxy for a Datomic Cloud Solo installation?
$ datomic-access client -p dev -r ap-northeast-1 enterprise-sandbox
download: to ../../.ssh/datomic-ap-northeast-1-enterprise-sandbox-bastion
fatal error: An error occurred (404) when calling the HeadObject operation: Key "enterprise-sandbox/datomic/access/private-keys/bastion.hostkey" does not exist
Unable to read gateway hostkey, make sure your AWS creds are correct.
Where is the best place to find answers to these kind of questions?#2019-12-1302:52onetomMy Solo version is the one before the last (Nov something marketplace version).
datomic-cli is 0.9.33#2019-12-1302:55onetomi also don't quite understand why would there be some private key stored in s3 to my system.
when i was starting up the solo system i was asked for a keypair.
isn't that keypair is the one which is used for both the primary compute group instances and for the bastion host too?
where is this documented?#2019-12-1303:13onetomsince it was only complaining about the hostkey, i've removed that step from the datomic-access script and accepted the hostkey manually,
BUT I had to enable SSH access in the bastion host's security group manually too.
i understand it's a very cautious default, but is it documented anywhere?
i've read a lot of docs and seen tons of videos, but none of those mentioned this requirement.#2019-12-1314:16marshall@onetom https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-gateway#2019-12-1315:31onetomthis section has eluded me somehow...
no idea why.
the documentation looks great and makes sense now.
thanks a lot for giving direction!#2019-12-1314:16marshall@onetom the latest Datomic CLI requires the latest release of Cloud#2019-12-1315:27onetom@U05120CBV since the nov 26th/27th was not working on nov 29th (when you said it's an "AWS marketplace issue")
and i haven't seen any newer images on the marketplace, i just assumed it's still not working.
i did take a peek at the release notes, but since it did not have any entries newer than nov 29th, it also suggested that i should just use the previous to last version.
for the record, on the 29th of nov (HKT), the problematic cloud formation template url i found somehow from the aws console looked like this:
$ curl -s | jq .Mappings.RegionMap
{
"us-east-1": {"Datomic": "ami-066145045fc7a4ad0", "Bastion": "ami-09e416d6385c15902"},
"us-east-2": {"Datomic": "", "Bastion": ""},
"us-west-2": {"Datomic": "", "Bastion": ""},
"eu-west-1": {"Datomic": "", "Bastion": ""},
"eu-central-1": {"Datomic": "", "Bastion": ""},
"ap-southeast-1": {"Datomic": "", "Bastion": ""},
"ap-southeast-2": {"Datomic": "", "Bastion": ""},
"ap-northeast-1": {"Datomic": "", "Bastion": ""}
}
and it seems to be modified on nov 15th the last time:
$ curl -Is
HTTP/1.1 200 OK
x-amz-id-2: gvBaTBQB/yS5MVpBf2kzkSHfS9mLZQbvts4BhOTrkrEjq6pD3g+g4ydf0m4knsJ+WBoPwN+FwDE=
x-amz-request-id: AF9C1169F7991A06
Date: Fri, 13 Dec 2019 15:16:05 GMT
x-amz-replication-status: COMPLETED
Last-Modified: Fri, 15 Nov 2019 15:27:33 GMT
ETag: "ae57666cf395e15fe66809522c78c92c"
x-amz-version-id: GIC.pH3ALeoNUyyaU_3.bmehFNcZL2E2
Accept-Ranges: bytes
Content-Type: application/octet-stream
Content-Length: 130983
Server: AmazonS3
now the release page (https://docs.datomic.com/cloud/releases.html#release-history) links to a slightly different url for the "same 569-8835 version", which is dated 11/26/2019, but it's last modified date is dec 03:
$ curl -Is
HTTP/1.1 200 OK
x-amz-id-2: NO6zf0g58YfTPGxMSrZtg+99UNgEqx++RkChsDhEhcSU3GIze3uMZ373Dkg3by/EQWDQSTIYz5A=
x-amz-request-id: C70010966DC504E4
Date: Fri, 13 Dec 2019 15:15:52 GMT
Last-Modified: Tue, 03 Dec 2019 14:25:29 GMT
ETag: "6b67112cbd9bddb2ee4d59de71e5e6a3"
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Content-Length: 107111
Server: AmazonS3
and it contains AMIs for many regions, as expected:
curl -s | jq .Mappings.RegionMap
{
"us-east-1": {"Datomic": "ami-0b853443711d20708", "Bastion": "ami-07dccb5098034c24d"},
"us-east-2": {"Datomic": "ami-0ee324fea6a1a937e", "Bastion": "ami-0d2278c155c1d6754"},
"us-west-2": {"Datomic": "ami-0ccaf9cb58eaa44db", "Bastion": "ami-0f410f80475d0894e"},
"eu-west-1": {"Datomic": "ami-04d09a0a833d508eb", "Bastion": "ami-073e038edb8c675b6"},
"eu-central-1": {"Datomic": "ami-07b3154c0242f0e87", "Bastion": "ami-04e29a583e47d8b80"},
"ap-southeast-1": {"Datomic": "ami-0e4afb22a156fdbac", "Bastion": "ami-018ec097e75c16803"},
"ap-southeast-2": {"Datomic": "ami-023f62ca869bf17a2", "Bastion": "ami-09360d51b0aa43a1b"},
"ap-northeast-1": {"Datomic": "ami-0bc5ca724bf8882b5", "Bastion": "ami-0bcc43206700dc511"}
}
#2019-12-1315:29onetomThat's not a very immutable move! ;D#2019-12-1315:33marshallUnfortunately we don’t have any control over the Marketplace listing or when/how stuff gets dated there#2019-12-1315:34marshallthe Datomic docs releases page will always be the official record of what releases are available and will have links to the separate system CFTs#2019-12-1315:34marshallin general, once you’ve subscribed to the product on the marketplace page, i would recommend you use the docs releases page from then on instead of going back to marketplace#2019-12-1315:36onetomthanks for the help!
that dec 5 last-modified date still doesn't make sense though.#2019-12-1315:37onetomi recorded all the details, in case it helps to make later releases less confusing.
i feel like i had bad luck and just tried things at a time when they were in flux.#2019-12-1315:37marshallprobably depends on the date we received notification from AWS that the release shipped and the date when we finished testing and actually ‘issued’ the release on our docs#2019-12-1315:38marshallthe objective is for any release listed as ‘current’ or ‘latest’ on the datomic docs release page to always work as would be expected#2019-12-1315:38marshallso if there are issues with the templates available directly from Marketplace we won’t post them to our releases page until those issues are resolved#2019-12-1315:39marshall(which, incidentally, is what happened in this last case)#2019-12-1315:42onetomthanks!
our team is getting more and more excited about datomic, despite these hiccups.
it took me a lot of explaining, but my efforts are starting to bear fruit! 🙂#2019-12-1315:43marshallGlad to hear it!#2019-12-1314:16marshallhttps://docs.datomic.com/cloud/releases.html#cli-0-9-33#2019-12-1317:32onetomI'm just going thru a solo stack deletion process by following https://docs.datomic.com/cloud/operation/deleting.html
i think examples like this:
aws --region (Region) application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:WriteCapacityUnits --resource-id table/datomic-(System)
aws --region (Region) application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:ReadCapacityUnits --resource-id table/datomic-(System)
could be better written as:
/usr/bin/env REGION="<region>" SYSTEM="<system>" \
bash -xc 'for dimension in Read Write; do aws --region $REGION application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:${dimension}CapacityUnits --resource-id table/datomic-$SYSTEM; done'
so the parts which need replacement are factored out to the beginning and will only need to be replaced once.
while the for loop makes it slightly more complicated, it also highlights the intent better.
it was not obvious to spot that few letter difference between the 2 commands on the website, where the difference was even off screen...
or a compromise:
REGION="<region>"
SYSTEM="<system>"
aws --region $REGION application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:WriteCapacityUnits --resource-id table/datomic-$SYSTEM
aws --region $REGION application-autoscaling deregister-scalable-target --service-namespace dynamodb --scalable-dimension dynamodb:table:ReadCapacityUnits --resource-id table/datomic-$SYSTEM
although this only works under bash , zsh and alikes, while the one using env works under fish or anything really, too.#2019-12-1318:50joshkhis there a way to retract all values of a :db.cardinality/many attribute, or must i retract each value individually? something like:
[:db/retract eid :recipe/ingredients] ; throws exception
instead of
[[:db/retract eid :recipe/ingredients "milk"]
[:db/retract eid :recipe/ingredients "sugar"]
[:db/retract eid :recipe/ingredients "eggs"]]
#2019-12-1319:50Joe Lane@joshkh You must do the latter#2019-12-1400:25joshkhi feared as much, only because i'm lazy. 🙂 thanks @lanejo01 for confirming.#2019-12-1401:31shaun-mahood@joshkh I believe there’s something in the “Understanding & Using Reified Transactions” presentation at https://docs.datomic.com/on-prem/videos.html - haven’t watched it for a while though, so it may only apply to on-prem (or I might be mixing up videos - but that’s a great one regardless).
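Joe Lane's "do the latter" can at least be automated: read the current values off the db and emit one retraction per value. A hedged sketch; the helper name is illustrative (not an official API) and the Cloud client namespace is assumed:

```clojure
;; Build the per-value retractions instead of writing them by hand.
(require '[datomic.client.api :as d])

(defn retract-all-values
  "Returns tx-data retracting every current value of a
  cardinality-many attribute on eid."
  [db eid attr]
  (for [[v] (d/q '[:find ?v
                   :in $ ?e ?a
                   :where [?e ?a ?v]]
                 db eid attr)]
    [:db/retract eid attr v]))

;; (d/transact conn
;;   {:tx-data (retract-all-values (d/db conn) eid :recipe/ingredients)})
```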
#2019-12-1405:15fjolne@joshkh there’s a nice collection of db fns, which includes fns for reseting to-many rels: https://github.com/vvvvalvalval/datofu#2019-12-1405:18fjolneThose use entity API, but are actually quite easy to rewrite with pull API (had to do this even on on-prem, cuz entity API introduced some subtle bugs for our case).#2019-12-1416:22mssis there a query explorer-type interface for datomic cloud deployments?#2019-12-1420:28val_waeselynckI don't think so unfortunately, but note that REBL might give you a lot of the same value prop.#2019-12-1422:32joshkhi wrote a little personal-use webapp to interactively explore my data, and planned to release it after christmas along with a guide on deploying containerised datomic cloud apps on AWS. maybe you can help me test it? 🙂#2019-12-1422:35joshkhit's a tree-like browser and (so far) supports transacting new values with in-place editing#2019-12-1422:35joshkh#2019-12-1416:27Joe Lane@mss Can you elaborate on what you mean by "query explorer"?#2019-12-1416:34msshttps://docs.datomic.com/on-prem/console.html#2019-12-1416:34mssconsole, I guess I should say#2019-12-1421:33onetomGot this error when I was upgrading a Solo system from 535-8812 to the latest (https://s3.amazonaws.com/datomic-cloud-1/cft/569-8835/datomic-solo-compute-569-8835.json)
Export with name <datomic-system-name>-CodeBucketPolicyArn is already exported by stack <datomic-system-name>-Compute-1P5MYAP35641W
i have no idea what it means.
i just deleted a system in the same region earlier and kept the code bucket as documented on the https://docs.datomic.com/cloud/operation/deleting.html page.
i suspect it has something to do with this specific situation, because that stack deletion didn't go completely flawlessly.
it was not able to delete a security group because it couldn't delete the ENIs it was referring to, so i had to manually delete them.
for now i will just tear the stack down and try to pull up the latest one and reuse the existing storage.
at least i will exercise how to do this...#2019-12-1422:06Ike MawiraHello, i would like to ask if the client pro version should match the specific version of datomic downloaded on my machine. I heard so in a tutorial while i was using datomic-free. Now i have downloaded datomic starter version 0.9.5981 but it seems that the latest version of client pro offered by Maven is 0.9.41.
I am getting an error while setting up a repl in intellij and I am not sure if that is the problem.
Dec 15, 2019 12:51:16 AM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
java.nio.channels.ClosedChannelException
at io.netty.handler.ssl.SslHandler.channelInactive(...)(Unknown Source)
#2019-12-1422:21Ike MawiraIts working, seems like i had imported [datomic.api] instead of [datomic.client.api] for my case.#2019-12-1422:10joshkh@shaun-mahood thanks for the video link, i'll check it out. and thanks to you too, @fjolne.yngling. transaction functions are the way forward.#2019-12-1422:17joshkhout of curiosity, do folks here ever find themselves battling the 120 second sync-libs error when deploying Ions with "large" dependencies -- for example, the AWS-SDK or some Apache java library?#2019-12-1516:37chris_johnsonI have not encountered that problem but it sure sounds to me like a good use case for incorporating AWS Lambda Layers (https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html). I imagine that might run into trouble in that it would likely need to be the Cloud instance’s …instance… of Lambda the Ultimate that actually held the Layer refs, so if you had two Ions that wanted incompatible versions of the AWS SDK for example, that would hurt#2019-12-1516:39chris_johnsonBut I could definitely see a future state where Datomic Cloud exposes the machinery of Layers to help you slim down the dependency payload for a given Ion, it would just need to be thought through well (and for all I know, correctly incorporated with existing use of Layers - I have no visibility into how Lambda the Ultimate or Ions are actually implemented today)#2019-12-1517:01kennyIons don’t deploy to Lambdas. I think they already do an intelligent diff of dependencies, only uploading the ones that changed. It sounds like joshkh may want to increase a timeout of some kind. #2019-12-1517:22chris_johnsonYou’re correct, it was me who needed more time to think things through well. 🙂#2019-12-1518:50joshkhthanks for the input @chris_johnson. one can never know too much about lambdas, and i will explore layers for sure. 
@kenny is right though - I'm not using lambdas at the moment, and i don't know of a way to increase this timeout.#2019-12-1519:42dominicmException: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "PoolCleaner[2018699554:1576421257813]"
Exception in thread "Thread-4 (ActiveMQ-client-global-scheduled-threads-270014218)" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Thread-2 (ActiveMQ-client-global-scheduled-threads-270014218)" java.lang.OutOfMemoryError: Java heap space
I'm seeing this, does it mean that my query will never return? I'm doing some big queries. Will it recover?#2019-12-1519:45onetomim trying to make an ion, but im getting back base64 encoded responses.
is there a way to get non-encoded responses somehow?
(ns hodur-example-app.core
  (:require [datomic.ion.lambda.api-gateway :as api-gateway]))
(defn debug [payload]
  {:status 200
   :headers {"Content-Type" "text/plain"}
   :body "debug"})
(def debug-ion (api-gateway/ionize debug))
datomic/ion-config.edn:
{:allow [hodur-example-app.core/debug-ion]
 :lambdas {:debug-ion
           {:fn hodur-example-app.core/debug-ion
            :integration :api-gateway/proxy}}
 :app-name "enterprise-sandbox"}
aws lambda invoke --function-name enterprise-sandbox-compute-debug-ion /dev/stdout:
{"statusCode":200,"headers":{},"body":"ZGVidWc=","isBase64Encoded":true}{
"ExecutedVersion": "$LATEST",
"StatusCode": 200
}
where ZGVidWc= is indeed the expected debug response:
$ echo 'ZGVidWc=' | base64 -d
debug
#2019-12-1519:53onetomand if i expose the lambda via an API gateway, i do get the base64 content still:
$ curl -d ''
ZGVidWc=
#2019-12-1601:39onetomI guess I'm doing this wrong, according to
https://docs.datomic.com/cloud/ions/ions-reference.html#lambda-ion
I must return a string if I'm exposing some fn as lambda.
Maybe the hodur-example-app is a bit obsolete?
https://github.com/hodur-org/hodur-example-app#2019-12-1603:40onetomi've tried the ion-starter project too and that also returns base64 response:
$ curl -s -d ':shirt'
W1sjOmludns6c2t1ICJTS1UtMjgiLCA6c2l6ZSA6eGxhcmdlLCA6Y29sb3IgOmdyZWVufV0KIFsjOmludns6c2t1ICJTS1UtMzYiLCA6c2l6ZSA6bWVkaXVtLCA6Y29sb3IgOmJsdWV9XQogWyM6aW52ezpza3UgIlNLVS00OCIsIDpzaXplIDpzbWFsbCwgOmNvbG9yIDp5ZWxsb3d9XQogWyM6aW52ezpza3UgIlNLVS00MCIsIDpzaXplIDpsYXJnZSwgOmNvbG9yIDpibHVlfV0KIFsjOmludns6c2t1ICJTS1UtMCIsIDpzaXplIDpzbWFsbCwgOmNvbG9yIDpyZWR9XQogWyM6aW52ezpza3UgIlNLVS01MiIsIDpzaXplIDptZWRpdW0sIDpjb2xvciA6eWVsbG93fV0KIFsjOmludns6c2t1ICJTS1UtMTIiLCA6c2l6ZSA6eGxhcmdlLCA6Y29sb3IgOnJlZH1dCiBbIzppbnZ7OnNrdSAiU0tVLTQ0IiwgOnNpemUgOnhsYXJnZSwgOmNvbG9yIDpibHVlfV0KIFsjOmludns6c2t1ICJTS1UtMTYiLCA6c2l6ZSA6c21hbGwsIDpjb2xvciA6Z3JlZW59XQogWyM6aW52ezpza3UgIlNLVS02MCIsIDpzaXplIDp4bGFyZ2UsIDpjb2xvciA6eWVsbG93fV0KIFsjOmludns6c2t1ICJTS1UtNCIsIDpzaXplIDptZWRpdW0sIDpjb2xvciA6cmVkfV0KIFsjOmludns6c2t1ICJTS1UtMzIiLCA6c2l6ZSA6c21hbGwsIDpjb2xvciA6Ymx1ZX1dCiBbIzppbnZ7OnNrdSAiU0tVLTI0IiwgOnNpemUgOmxhcmdlLCA6Y29sb3IgOmdyZWVufV0KIFsjOmludns6c2t1ICJTS1UtMjAiLCA6c2l6ZSA6bWVkaXVtLCA6Y29sb3IgOmdyZWVufV0KIFsjOmludns6c2t1ICJTS1UtOCIsIDpzaXplIDpsYXJnZSwgOmNvbG9yIDpyZWR9XQogWyM6aW52ezpza3UgIlNLVS01NiIsIDpzaXplIDpsYXJnZSwgOmNvbG9yIDp5ZWxsb3d9XV0K
then i guess the problem might be how i setup the api gw 😕#2019-12-1603:50onetomi think i've found the missing step in the docs:
https://docs.datomic.com/cloud/ions/ions-tutorial.html#org6d06b38
i had to set all (`*/*`) content types to be treated as binary#2019-12-1611:42geodromeI am working through day-of-datomic-cloud. Im on tutorial/constructor.clj. When I try to execute the following line:
(d/with (d/with-db conn) {:tx-data [{:user/email "
https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/constructor.clj#L37
I get an error:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).'datomic/ion-config.edn' is not on the classpath
I had no issues with several previous tutorials. I am using Cursive with IntelliJ. The REPL is configured to ‘Run with Deps.’ I restarted the REPL and stuff like that. My deps.edn includes “resources” under the :paths key. And the file datomic/ion-config.edn is there in the resources directory.
I suspect this has something to do with the datomic/ion-config.edn being unavailable in the cloud instance. Am I on the right track? I did not configure anything pertaining to ions on the cloud instance.#2019-12-1612:22daniel.spanielIs there a way to set the blank > < in a variable passed to query. I am doing a query where if the I am filtering on values in a field, but the filter might be empty so this case I want to get any and all values for that field . Was trying to pass in a variable like ' or ` or '' or "" ( the blank ) if the value was nil but to no avail.#2019-12-1612:22daniel.spaniel(d/q '[:find (pull ?e pattern)
:in $ pattern ?customer-id
:where
[?e :invoice/customer ?customer-id]]
db '[*] customer-id)
so for example the customer-id might have an id or might be nil ( but I can't use nil and need a blank )#2019-12-1614:11Joe Lane@dansudol The query is a datastructure so you can construct it on the fly using a cond depending on customer-id's presence then conj either [?e :invoice/customer ?customer-id] or [?e :invoice/customer]. I wouldn't use an if because filters always seem to expand the cases they handle.#2019-12-1614:23favilaanother option is a rule which uses a sentinel#2019-12-1614:49daniel.spanielthanks @lanejo01 I did not know I could use cond .. good idea .. not sure what a sentinel is @favila#2019-12-1614:53favilaA value you pick to represent “no filter” which is not in the space of matchable values#2019-12-1615:05favila'[[(invoice-with-customer ?e ?cust)
[(!= ?cust :any)]
[?e :invoice/customer ?cust]]
[(invoice-with-customer ?e ?cust)
[(= ?cust :any)]]
]#2019-12-1615:05favila(for e.g.)#2019-12-1615:05faviladatalog refuses nil values#2019-12-1615:06daniel.spanieli know .. kind of tricky#2019-12-1615:06daniel.spanieli see your query .. i just don't get it .. hard to grok#2019-12-1615:07daniel.spanielis invoice-with-customer an on the fly function you are defining ?#2019-12-1615:08favilaits a rule#2019-12-1615:09favilahttps://docs.datomic.com/on-prem/query.html#rules#2019-12-1615:14daniel.spanielwhoa .. this is fancy .. very interesting too .. trying this out#2019-12-1614:55daniel.spaniel@lanejo01 could you do me a favour and write that query with the cond .. i was hacking around and could not get it#2019-12-1614:56Joe LaneSure, hang on.#2019-12-1615:10Joe Lane(defn customers
[db {:keys [customer-id] :as arg-map}]
(let [the-query (cond-> {:query {:find ['(pull ?e pattern)]
:in ['$ 'pattern]
:where []}
:args [db '[*]]}
customer-id (->
(update-in [:query :where] conj '[?e :invoice/customer ?customer-id])
(update-in [:query :in] conj '?customer-id)
(update-in [:args] conj customer-id))
(nil? customer-id) (-> (update-in [:query :where] conj '[?e :invoice/customer])))]
(d/q the-query)))#2019-12-1615:10Joe LaneI think that should work. You might want to extract the right hand side of the first cond-> clause to be its own fn, but I thought I'd include it all in one place for now.#2019-12-1615:14daniel.spanielahh .. update-in .. ok .. very clever#2019-12-1616:28daniel.spanielturned out to be super whacky .. ( i have way more clauses and variable ) BUT .. that sucker worked .. i am well shocked .. and thanks for the days surprise of whacky code .. very interesting#2019-12-1617:03Joe LaneI use an extended variation of this with about 30 small clauses to construct and expose a domain specific pseudo-sql query language (in json!) to the mobile developer on one of my projects. They love it and have implemented several features without even talking to me about it. It's pretty cool 🙂
I use the cond-> pattern whenever I have possibly nillable values and I'm constructing queries/tx-data. It's one of my top 5 fav language tools.
Glad the above was helpful.#2019-12-1617:13daniel.spanielsuper nifty .. thanks again 🙂#2019-12-1619:03rgorrepatisala2018!#2019-12-1620:46aisamuYou might want to change your password :P#2019-12-1620:58rgorrepatioops!#2019-12-1621:37aisamu(Jokes aside, please be aware that we have at least 2 public logs of these channels 😬)#2019-12-1623:35kennyFor those working with Datomic Cloud, here's a short script that will automatically delete the durable storage for you so you don't have to go through the manual steps listed in the docs. https://github.com/ComputeSoftware/datomic-cloud-tools. We've found this useful to integrate with our infra-as-code tools and just generally making spinning up and down Datomic systems a bit easier.#2019-12-1717:24alidlorenzoif we follow the datomic ion tutorial we're interacting/changing with live database, correct? so we should delete it afterwards in order to start from fresh slate (since there's no way to rollback changes)?#2019-12-1717:26alidlorenzoalso, does starting/stopping a datomic-gateway effect billing? (so i shouldn't forget to stop it after developing?) there's not a lot of info on implications of these commands in docs#2019-12-1717:43shaun-mahood@alidcastano Yes, the ion tutorial transacts data to a live database. If you want to start over, deleting and recreating the database shouldn't cause any problems.
The API gateways are billed based on the number of requests, so you should only see a cost if you are using it a lot (mine is $3.50/million requests, not going to break the bank from development use).#2020-02-1313:26pfeodrippe@U05120CBV thank you!#2020-02-1214:50TyIs datomic cloud the only way to use datomic?#2020-02-1214:52ghadino, there are two products: Cloud & On-Prem#2020-02-1214:52ghadiif you go to http://datomic.com and click on Products or peruse some docs, you can see differences between the two#2020-02-1214:53ghadithere's also some comparison here https://docs.datomic.com/on-prem/moving-to-cloud.html#2020-02-1214:57TyThanks! Much appreciated#2020-02-1217:58Sam DeSotaFor non-breaking schema updates, is it fine to re-transact my entire apps schema idents every time I start an app instance? Could this cause any issues down the road?#2020-02-1218:00ghadi1) don't do anything except non-breaking schema growth
2) yeah it's unnecessary and can probably cause issues, especially when you have many app instances#2020-02-1218:04Sam DeSotaRight, everything is non-breaking, I just put that qualifier in there for clarification.
Didn't know if datomic only transacted changes anyway, I guess I'll have to write some sort of diffing engine, manually syncing the schema on each change doesn't work for my use case.
Thank you.#2020-02-1218:06ghadidon't need a diffing engine, just query#2020-02-1218:07ghadi(set (map first (d/q '[:find ?name :where [?e :db/ident ?name] [?e :db/valueType]] db)))#2020-02-1218:07ghadithen transact all the stuff that isn't in the set#2020-02-1218:07ghadi(schema is not special, they're just ordinary entities)#2020-02-1218:08Sam DeSotaGot it, yeah I guess I only need to diff on the top level items, no need for a deeper tree diff update. Thanks again!#2020-02-1218:09ghadino problem... what does "top-level" mean?#2020-02-1218:09ghadiyou mean the existence of the attribute itself?#2020-02-1218:10Sam DeSotaI mean for example.. updating an existing entity from non-component entity to component entity, I can just re-transact the entire ident, instead of comparing all the individual db entity properties and updating only those that changed#2020-02-1218:12ghadiyeah, understood. Another approach is to transact collections of schema and mark in the tx metadata something identifying the collection itself#2020-02-1218:12ghadithat way one collection can make schema, then a later collection can update some part of it#2020-02-1218:13ghadithat way instead of ensuring that attr :patient/id exists, you can ensure that 20200212-patient-stuff.edn is in the database#2020-02-1218:14ghadithen you can add 2021-more-patient-stuff.edn later#2020-02-1218:14ghadiwe use an attribute called :migration/file to tag collections of schema in this way#2020-02-1218:19Sam DeSotaGot it, yeah using a more traditional migration. I do like the first approach however, since all changes to Datomic are non-breaking anyway, I can use a declarative format agnostic to datomic for my schema, then generate the datomic idents, as well as schemas for other purposes ex: GraphQL from that source.
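ghadi's idempotent "query the installed idents, then transact only what's missing" approach can be sketched standalone. All names and sample data below are invented for illustration; in a real app `existing-idents` would be the result of the ident query and the remaining maps would go to `d/transact`:

```clojure
;; `missing-schema` is a made-up helper: it drops every schema map
;; whose :db/ident is already installed in the database.
(defn missing-schema
  "Returns the schema maps whose :db/ident is not in existing-idents."
  [existing-idents schema]
  (remove (comp existing-idents :db/ident) schema))

;; Stand-in for the set returned by the d/q ident query.
(def existing-idents #{:user/uuid})

(def app-schema
  [{:db/ident       :user/uuid
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one}
   {:db/ident       :user/phone-number
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])

(map :db/ident (missing-schema existing-idents app-schema))
;; => (:user/phone-number)  ; only this map still needs transacting
```

Because re-asserting an identical datom is a no-op, this diff is an optimization rather than a correctness requirement, but it keeps startup transactions small.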
It's been super helpful for rapidly building out internal apis without the need to duplicate schema information everywhere.#2020-02-1218:02ghadiyou can always query then transact only what's missing#2020-02-1321:33jarethttps://forum.datomic.com/t/datomic-0-9-6045-now-available/1360#2020-02-1322:37joshkhi'm having trouble requiring com.datomic/client-cloud {:mvn/version "0.8.81"}:
Clojure 1.10.1
(require '[datomic.client.api :as d])
Execution error - invalid arguments to datomic.client.api/loading at (api.clj:16).
:as - failed: #{:exclude} at: [:exclude :op :quoted-spec :spec]
:as - failed: #{:only} at: [:only :op :quoted-spec :spec]
:as - failed: #{:rename} at: [:rename :op :quoted-spec :spec]
(quote :as) - failed: #{:exclude} at: [:exclude :op :spec]
(quote :as) - failed: #{:only} at: [:only :op :spec]
(quote :as) - failed: #{:rename} at: [:rename :op :spec]
the previous version com.datomic/client-cloud {:mvn/version "0.8.78"} seems fine:
Clojure 1.10.1
(require '[datomic.client.api :as d])
=> nil
#2020-02-1322:39ghadi@joshkh can you paste your whole clojure -Stree ?#2020-02-1322:39ghadiCmd-Shift-Enter will open up the snippet paste in slack#2020-02-1322:41ghadiyou can redact whatever proprietary stuff you have#2020-02-1322:43joshkhclj -Adev -Stree#2020-02-1322:48ghadiare you running a repl with clj/ clojure ? I don't see the repl prompt appear before the require#2020-02-1322:48ghadiclojure -Sdeps '{:deps {com.datomic/client-cloud {:mvn/version "0.8.81"}}}'
Clojure 1.10.1
user=> (require '[datomic.client.api :as d])#2020-02-1322:48ghadiyou should try running ^ outside your project#2020-02-1322:49joshkhsorry, used clojure -Adev -Stree in my code paste. i'll try your next suggestion now.#2020-02-1322:50joshkh$ clojure -Sdeps '{:deps {com.datomic/client-cloud {:mvn/version "0.8.81"}}}'
Clojure 1.10.1
user=> (require '[datomic.client.api :as d])
Execution error - invalid arguments to datomic.client.api/loading at (api.clj:16).
:as - failed: #{:exclude} at: [:exclude :op :quoted-spec :spec]
:as - failed: #{:only} at: [:only :op :quoted-spec :spec]
:as - failed: #{:rename} at: [:rename :op :quoted-spec :spec]
(quote :as) - failed: #{:exclude} at: [:exclude :op :spec]
(quote :as) - failed: #{:only} at: [:only :op :spec]
(quote :as) - failed: #{:rename} at: [:rename :op :spec]
user=>
#2020-02-1322:50ghadijust to confirm, can you paste your -Stree again but using ^ outside your project?#2020-02-1322:50joshkhwait, outside of my project it works#2020-02-1322:50ghadiyeah that's what I suspected#2020-02-1322:51joshkhdid i break my toys?#2020-02-1322:51ghadiwithout knowing what -Adev is doing, it's hard to say#2020-02-1322:51joshkh:aliases {:dev {:extra-deps {com.datomic/client-cloud {:mvn/version "0.8.81"}
com.datomic/ion {:mvn/version "0.9.35"}
com.datomic/ion-dev {:mvn/version "0.9.251"}}}}
#2020-02-1322:53ghadiyou have some stray directories / AOT in your main project?#2020-02-1322:54ghadiwhat's in :paths#2020-02-1322:54ghadi(just a lark...)#2020-02-1322:54joshkhjust some benign files in /resources. :paths ["src/clj" "resources"]#2020-02-1322:54ghadiclass files?#2020-02-1322:56joshkhnope!#2020-02-1322:56joshkhi'm stumped 🙂#2020-02-1322:57ghadidid you follow these instructions: https://docs.datomic.com/cloud/operation/howto.html#ion-dev ?#2020-02-1322:57ghadithey changed recently#2020-02-1323:04joshkhyup, and no dice:
(ns genpo.client
(:require [datomic.client.api :as d]))
Clojure 1.10.1
Loading src/clj/genpo/client.clj...
Syntax error (ExceptionInfo) compiling at (src/clj/genpo/client.clj:1:1).
Call to clojure.core/refer-clojure did not conform to spec.
#2020-02-1323:19ghadiif it works outside your project @joshkh I'd try to debug your project classpath#2020-02-1323:19ghadican't repro it over here#2020-02-1323:20joshkhyou got it. thanks @ghadi for your help!#2020-02-1323:20ghadiand your deps looked correct#2020-02-1323:20ghadiion-dev should definitely be in .clojure/deps.edn#2020-02-1323:20joshkhyes - i've moved it there. thanks for the tip.#2020-02-1323:20ghadinp#2020-02-1403:23Sam DeSotaI'm starting to look into full-text search with cloud. I'd like to use a full text search database hosted in the datomic VPC, and sync data via the the log api if that's reasonable. Ideally, I could keep the full-text search in sync with the datomic relatively quickly (< 30s), is there any resources or directions anybody could point me in for working on this?#2020-02-1404:50emI’m really interested in the same thing, and been meaning to build out this functionality in the near future. I believe previous discussion mentioned quite a few people doing it with ElasticSearch, and somewhere (I think?) it was officially suggested to use AWS CloudSearch. The basic implementation idea would probably be sipping the transaction log and publishing directly to the search solution, so I’m pretty sure syncing should be much faster than 30s given the reactive ideas built into datomic#2020-02-1417:32joshkhCan you elaborate on the reactive ideas built into datomic? I thought sipping the transaction log would be more akin to polling the log every n seconds in a loop.#2020-02-1418:09Sam DeSotaRight, that's what I'm curious about. I've seen a couple of examples of a utilizing a polling thread with ions to do subscriptions, but if there's some way to get a message queue of all Datomic transactions, that would be ideal#2020-02-1419:22emAhhh, yeah I think I mixed up on-prem and cloud, been watching too many old Rich Hickey Datomic videos for my source of datomic truth and not so much the documentation. 😛
I was thinking of tx-report-queue, which was a big idea on-prem for the Peers and which was understandably not supported in the Client API for cloud. Now that Ions are out, though, and there must be some kind of internal implementation keeping the transaction log synced across query groups, is there a way to access this API?
There’s this old slack conversation https://clojurians-log.clojureverse.org/datomic/2018-06-27 between @U09FEH8GN and @U072WS7PE where it was mentioned @currentoor we certainly understand the value of tx-report-queue, and will probably do something similar (or better) in Cloud at some point. That said, you should plan and build your app around what exists today. Was wondering if I missed an update since, or what kinds of workarounds “build your app around what exists today” that people have found to work for them?#2020-02-1419:37favilapolling#2020-02-1415:13Joe LaneFor both of you asking about search I'm curious, what is the expected size of the ES Cluster you will be running in the VPC?#2020-02-1418:05Sam DeSotaFor me, just used for a product database of about 100,000 user-generated products (title, description, tags) and an orders / customer database of ~5000 orders a month, just for admin tasks.#2020-02-1418:05Sam DeSotaRunning in the VPC#2020-02-1417:30joshkhI want to upsert two entities with tuples in the same transaction, where the second entity's tuple references the first entity. An initial transaction works as expected:
(d/transact db
{:tx-data [{:db/id "entitya"
:feature/id "SomeId123"
:feature/type "Gene"
:feature/type+id ["Gene" "SomeId123"] ; <-- tuple for upsert
}
{:db/id "entityb"
:attribute/view "Gene.primaryIdentifier"
:attribute/value "MC3R"
:attribute/feature "entitya" ; <-- ref back to entitya temp-id from above
:attribute/feature+view+value ["entitya" "Gene.primaryIdentifier" "MC3R"]}]})
=> success
But transacting the same tx-data again throws a Unique conflict caused by the second entity, even though I'm including the tuple attribute value (albeit a temporary id):
(d/transact (client/get-conn)
{:tx-data ...same as above})
Unique conflict: :attribute/feature+view+value, value: [47257009761812574 "Gene.primaryIdentifier" "MC3R"] already held by: 27971575810621538 asserted for: 31454828647415908
Should I expect a successful upsert here?
• Edit - I'm on the latest version of Datomic Cloud 8846 and client 0.8.81#2020-02-1419:31favilaOn-prem has the same behavior. I too am curious if this is by design because one of our desired use cases for composite tuples was to have upserting composites.#2020-02-1419:33favilaYou can use them as upserting only if you explicitly assert the final value of the composite in the transaction. composites don’t seem to be consulted for upserting-tempid resolution, even if no part of the composite involves a ref#2020-02-1419:35favilae.g. transacting {:a eid-of-x :b eid-of-y} with a defined composite upsert attr defined of :a+b may also produce a datom conflict instead of upserting#2020-02-1419:36favilainstead we have to do {:a eid-of-x :b eid-of-y :a+b [eid-of-x eid-of-y]} always. And we can’t use tempids or lookup refs for eid-of-x or eid-of-y , only raw entity ids#2020-02-1517:30daniel.spanielCurious to know if a regex query in the where clause like this#2020-02-1517:30daniel.spaniel[?e :invoice/number ?number]
[(re-find ?regex ?number)]
#2020-02-1517:30daniel.spanielis supported .. my regex is like #"(?i)blah"#2020-02-1517:31daniel.spanielthis works in the in memory datomic but is barfing in datomic cloud ( ion )#2020-02-1517:37daniel.spanielthe error is
Not supported: class java.util.regex.Pattern
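Since the error shows a java.util.regex.Pattern failing to serialize across the client boundary, one workaround (favila suggests it later in this log) is to pass the regex as a plain string and construct the Pattern inside the query. A sketch: `invoice-q` is a made-up name, and whether `re-pattern`/`re-find` are permitted in a given Cloud query environment is an assumption here:

```clojure
;; Ship the regex as a string input; build the Pattern inside the
;; query so no java.util.regex.Pattern crosses the wire.
(def invoice-q
  '[:find ?e ?number
    :in $ ?regex-str
    :where
    [?e :invoice/number ?number]
    [(re-pattern ?regex-str) ?regex]   ; bind the compiled pattern
    [(re-find ?regex ?number)]])       ; predicate: keep matches only

;; The predicate itself, runnable outside any query:
(re-find (re-pattern "(?i)blah") "BLAHdiblah")
;; => "BLAH"
```

You would invoke it as `(d/q invoice-q db "(?i)blah")` instead of passing a `#"(?i)blah"` literal.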
#2020-02-1522:52steveb8n@dansudol try putting the Pattern class in the :allow values of ion-config.edn#2020-02-1523:04daniel.spanielwill do @steveb8n good idea#2020-02-1608:39odieHi all,
One of the queries I’m trying to run seemed “slow”, at around 2+ seconds
on my local machine.
The query is just trying to match on an exact value for an attribute. It looks something like this:
'[:find [(pull ?e [*]) ...]
:in $ ?target-val
:where [?e :some-attr ?target-val]]
It turns out, there are ~10M entities with this particular attribute. I then tried to speed this up by asking for the attribute to be indexed.
The attribute was updated to look like:
{:db/ident :some-attr
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/index true}
However, the query speed did not change.
I then tried looking things up directly through the AVET index like so.
(d/datoms db :avet :some-attr "12345")
This results in an error saying the attribute isn't indexed:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/attribute-not-indexed attribute: :some-attr is not indexed
I also tried running
(d/request-index (db/get-conn)) ;=> true
and
(deref (d/sync-index conn (d/basis-t (d/db conn)))) ;=> blocks forever
I’m pretty lost on how to go about getting the index built.
Is there something obvious I’m missing here?
I’m running 0.9.6014 btw.#2020-02-1614:05favilaYou are on the right track but indexing isn’t instant#2020-02-1614:06favilatransactor logs and metrics will show if it’s actively indexing#2020-02-1614:07favilaUse a basis t exactly corresponding to the t when you issued request-index (or a little older if you are unsure) to your d/sync-index call#2020-02-1614:07favilaIf it’s still blocking, then you are definitely still indexing#2020-02-1614:08favilaYou just need to wait for it to finish#2020-02-1614:08favilaAnd you can confirm by looking at tx logs#2020-02-1706:37odie@U09R86PA4 I left everything running over night. The index still isn’t available.
Digging a bit in the log, I found that whenever I ran
(d/request-index (db/get-conn))
The following lines would soon show up in the log file:
2020-02-17 14:19:10.773 WARN default o.a.activemq.artemis.core.server - AMQ222165: No Dead Letter Address configured for queue admin.request5e4a305e-a0cd-4a28-a539-6e209b4d53ec in AddressSettings
2020-02-17 14:19:10.778 WARN default o.a.activemq.artemis.core.server - AMQ222166: No Expiry Address configured for queue admin.request5e4a305e-a0cd-4a28-a539-6e209b4d53ec in AddressSettings
2020-02-17 14:19:10.860 WARN default o.a.activemq.artemis.core.server - AMQ222165: No Dead Letter Address configured for queue admin.response5e4a305e-bd13-4fe6-905e-d3a0a0ace172 in AddressSettings
2020-02-17 14:19:10.860 WARN default o.a.activemq.artemis.core.server - AMQ222166: No Expiry Address configured for queue admin.response5e4a305e-bd13-4fe6-905e-d3a0a0ace172 in AddressSettings
2020-02-17 14:19:10.909 INFO default datomic.update - {:event :transactor/admin-command, :cmd :request-index, :arg "stock-insight-d1b785f9-4994-485c-960f-45cc94ebced8", :result {:queued "stock-insight-d1b785f9-4994-485c-960f-45cc94ebced8"}, :pid 97181, :tid 97}
2020-02-17 14:19:11.328 INFO default datomic.update - {:index/requested-up-to-t 25104804, :pid 97181, :tid 54}#2020-02-1706:37odieI’m guessing the last line means all the index has been brought up to t:25104804.#2020-02-1706:39odieIndexing on said field was enabled at t:25104802. So I guess that means the transactor thinks all index as been brought up to date?#2020-02-1706:42odieHowever, trying to use the AVET index fails in the same way.#2020-02-1706:43odieWould those activemq warnings be some indication that something isn’t working the way it is supposed to?#2020-02-1706:46odieAlso, I noticed that I was starting the transactor with java 11. I’ve since switched to running it with java 8. Could that have messed something up?#2020-02-1712:23favila The last line means the index has been requested up to that T, not that it is finished#2020-02-1712:24favilaTry sync-index with that t. If it blocks, you are not finished#2020-02-1712:25favilaCognitect says they support java 11 :man-shrugging: #2020-02-1712:25favilaI think the activemq warnings are red herrings#2020-02-1611:47daniel.spaniel@steveb8n, that did not work .. seems like
java.util.regex.Pattern
#2020-02-1611:48daniel.spanielis not allowed in the :allow section of the ion-config.edn file#2020-02-1614:04favilaCould this be a serialization issue? Try using a string for the re and constructing a pattern early in the query with re-pattern#2020-02-1621:30steveb8nIn mine I have just the fn from the namespaces I need e.g. clojure.string/starts-with?#2020-02-1621:34steveb8nsince Pattern in java interop, you might create your own fn with Pattern inside and “allow” that instead#2020-02-1623:19DaoudaHey folks,
Let's say I have an entity with an attr called :entity/hash, and more than one entity may share the same hash value.
Now I want to perform a query where I pass a list of hashes hash1 hash2 hash3 and the query returns tuples of hash count-of-entities-with-that-hash, like this: [hash1 3 hash2 5 hash3 80] or {hash1 3 hash2 5 hash3 80}
I don't want to perform that at the application level; I want the query to give me back that result. Is it possible, and how can I achieve it?
Snippet code will be very welcome 😄#2020-02-1706:24pithylessUnless I misunderstood the question, this sounds like an aggregate count query: https://docs.datomic.com/cloud/query/query-data-reference.html#built-in-aggregates
[:find ?hash (count ?hash)
:with ?entity
:in $ [?hash ...]
:where [?entity :entity/hash ?hash]]
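On invented sample data, the per-hash counting this aggregate expresses can be shown with plain Clojure, which makes the result shape concrete (hashes with no matching entity simply do not appear, in the query or here):

```clojure
;; [entity-id hash] pairs standing in for :entity/hash datoms.
(def entity-hash
  [[1 "hash1"] [2 "hash1"] [3 "hash1"]
   [4 "hash2"] [5 "hash2"]])

(def wanted ["hash1" "hash2" "hash3"])

;; Count entities per hash, restricted to the requested hashes.
(select-keys (frequencies (map second entity-hash)) wanted)
;; => {"hash1" 3, "hash2" 2}   ; "hash3" is absent: no entity carries it
```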
#2020-02-1723:07Daoudaactually you got it right, thank you very much 😄#2020-02-1707:21emAre there any other UI options for Datomic Cloud other than REBL? I'd love to introduce non-clojure team members to Datomic to get them excited and have them try out queries, and the old Datomic Console for on-prem looked perfect, but sadly doesn't seem to be available for cloud(?). Have people built any solutions for this need?#2020-02-1712:10joshkhis there a way to configure a username and password for the Presto server built into the Datomic Cloud access gateway?#2020-02-1715:05souenzzo[on-prem] Are there guidelines about datomic and core.async?
I need to run queries inside a go-block#2020-02-1715:18favilaqueries are blocking work, which you shouldn't do in a go-block generally#2020-02-1715:19favilaunless you know they will complete very quickly?#2020-02-1715:38souenzzoI already locked up all my threads by querying inside go-blocks 😕#2020-02-1715:38souenzzoI know, I can create a thread pool and bla bla bla
But it can't be delivered as a library, like datomic.client.api.async#2020-02-1715:56souenzzo@U04VDQDDY I remember that some time ago you asked about connecting datomic-client-pro to a datomic:mem conn
Did you end up with some solution?#2020-02-1716:02mfikes@U2J4FRT2T that must have been someone else with that issue#2020-02-1716:04maxtUsing clojure spec for datomic entity spec seems like a good idea. Is there any reason why it might not? I can't find it being mentioned anywhere.
Doing so would make it easy to use the same verification code client side and server side. The possible downside I can think of is that it might be a bit of overhead.
Something along the lines of:
(s/def :user/phone-number (s/and string? #(re-matches #"\+[0-9 +-]+" %)))
(s/def :user/uuid uuid?)
(s/def :example/user (s/keys :req [:user/uuid] :opt [:user/phone-number]))
(in-ns 'example)
(defn user? [db eid]
(let [user (d/pull db '[*] eid)]
(if (s/valid? :example/user user)
true
(s/explain-str :example/user user))))
;; datomic schema
{:db/ident :example/user
:db.entity/preds example/user?}
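The spec half of the snippet above runs without Datomic. A self-contained version, with a literal map standing in for the `(d/pull db '[*] eid)` result:

```clojure
(require '[clojure.spec.alpha :as s])

(s/def :user/phone-number (s/and string? #(re-matches #"\+[0-9 +-]+" %)))
(s/def :user/uuid uuid?)
(s/def :example/user (s/keys :req [:user/uuid] :opt [:user/phone-number]))

;; A map shaped like a pulled user entity.
(s/valid? :example/user {:user/uuid         (java.util.UUID/randomUUID)
                         :user/phone-number "+1 555-0100"})
;; => true

;; Missing :user/uuid fails the :req check.
(s/valid? :example/user {:user/phone-number "not-a-number"})
;; => false
```

Wired up as a `:db.entity/preds` predicate, a failing spec would abort the transaction, so returning `(s/explain-str ...)` (a truthy string) from the predicate would not actually reject the entity; the predicate needs to return false/nil on failure and report the explanation some other way (e.g. logging).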
#2020-02-1718:21Luke Schubertis there any way to mass transact data?#2020-02-1718:32favilaWhat do you mean?#2020-02-1718:32favilaDo you mean this? https://docs.datomic.com/on-prem/best-practices.html#pipeline-transactions#2020-02-1718:34Luke SchubertI'm already pipelining, I was just wondering if there was some functionality to run a large batch of transactions#2020-02-1718:37Luke SchubertI wasn't able to find anything, but I figured I would ask here in the event that I was just failing to search properly#2020-02-1718:52favilaI'm still not sure what you want more than running transactions repeatedly#2020-02-1718:54favilathere are some tricks to improve "import" transaction cost or performance if that's what you're looking for?#2020-02-1718:54favilalike performing the import locally on large instances with SSDs#2020-02-1718:55favilaincreasing the index threshold#2020-02-1718:55favilanot indexing any attrs until the end of the import#2020-02-1721:37joshkhso this is interesting! when using the standalone pull syntax with [:db/id] as the selector, the results include reverse reference attributes with a '... symbol value. is this expected?
(d/pull db [:db/id] 1234567891234)
=>
{:attribute/a "value-a"
:attribute/b "value-b"
:attribute/_c ...
:attribute/_d ...}#2020-02-1721:47favilaI can’t reproduce this#2020-02-1721:47favilaI definitely don’t consider this expected#2020-02-1721:48favilacan you provide some more context?#2020-02-1721:50Alex Miller (Clojure team)are you sure that's not just repl printing?#2020-02-1722:06joshkhyup:
((juxt identity type) (:attribute/_c (d/pull (client/db) [:db/id] 1234567891234)))
=> [... clojure.lang.Symbol]
a coworker found this today, and i reproduced it on my machine (different IDEs and REPLs)#2020-02-1722:25joshkh> can you provide some more context?
DatomicCloudVersion 8846
com.datomic/client-cloud {:mvn/version "0.8.78"}
and nothing peculiar about our schema. the reverse reference attributes are legit, and for what it's worth non-component. pulling * on the same entity returns the same data but without reverse references (as expected).#2020-02-1722:40joshkhhas anyone here successfully integrated Datomic Cloud Analytics with a third party BI platform? we can successfully validate our Presto connection, but after selecting a Schema (db name) we get the error Query failed: Expected string for :db-name
someone else has the same problem and posted on the forums a few months ago without a resolution
https://forum.datomic.com/t/error-on-integration-between-datomic-analytics-and-power-bi/1266/2#2020-02-1723:10DaoudaHey folks, how does retraction impact Datomic performance?
Does it make database reads and writes faster?
What about excision, does it have the same impact or a different one?#2020-02-1820:55BrianDatomic Cloud instance suddenly giving
{
"errorMessage": "Connection refused",
"errorType": "datomic.ion.lambda.handler.exceptions.Unavailable",
"stackTrace": [
"datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)",
"datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)",
"datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:171)",
"datomic.ion.lambda.handler.Handler.handle_request(handler.clj:196)",
"datomic.ion.lambda.handler$fn__3841$G__3766__3846.invoke(handler.clj:67)",
"datomic.ion.lambda.handler$fn__3841$G__3765__3852.invoke(handler.clj:67)",
"clojure.lang.Var.invoke(Var.java:399)",
"datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)"
]
}
I've tried from lambda as well as bastion. The EC2 instance is up and running. It's been a while since I've touched this. If I redeploy master with no changes, will I lose the handles I have connecting api gateway and my lambda ions? I just need to get this service back up and running#2020-02-1821:11jaret@brian.rogers can you log a case by emailing support? I would like to gather more information on this failing service before offering concrete next steps and it would be best to share that information over a case.
Useful starting info:
-Cft version.
-solo or prod?
-other services are working?
-did you deploy before the error? Anything change before you saw this error?#2020-02-1821:17Brian@jaret I just redeployed master and it's fixed itself. Would it be useful for me to still submit a case to cognitect?#2020-02-1821:18BrianIf it helps: cft version I don't know (what is cft?), solo topology, we only are running that one datomic service so I couldn't test anything else, lat deployment was in the summer and it's been running ever since until a day or two ago#2020-02-1821:19jaretYes. I am very interested in tracking this down and would like to provide you potential steps to gather a thread dump should this issue occur again.#2020-02-1821:20jaretCFT = cloud formation template version, found in the outputs of your compute stack#2020-02-1821:22jaret@brian.rogers obviously no urgency on the case, but if you get a chance please do log one. If we have a bug here I’d like to address it.#2020-02-1821:24BrianSure thing!#2020-02-1822:53lilactownhas anyone used datascript as a client-side cache for datomic?#2020-02-1822:54Joe LaneI think you'd be surprised how different they are.#2020-02-1822:54Joe Lane(Meaning, I was)#2020-02-1822:59lilactownin that semantically they are too different for datascript to act as a cache that way?#2020-02-1823:00lilactownthe reason I’m asking is because I’ve been thinking about building a datalog API in front of our microservices (a la GraphQL), and started thinking that it might make sense to use datascript as a client-side cache to reduce the queries that have to actually hit the backend.
my thinking was that a query could respond with not only the result, but the datums that were resolved in the processing of the query. that way the client side could transact those into a client-side cache and future queries could potentially query the local db instead of sending a request to the server.
however, populating the cache for a query could end up accidentally being quite a lot of datums that need to be sent over the wire, even if the query result is small. so I was wondering if this same idea had been solved by some datomic <-> datascript integration.#2020-02-1902:04aisamuThis sounds a lot like Meteor's minimongo! (along with all the hard problems that came with it)#2020-02-1903:04lilactownI'm starting to think that datalog might be too general for this#2020-02-1913:43favilathere’s a reason graphql resembles pull expressions more than sql or datalog#2020-02-1913:45favilaIME the problems were 1) determining dependencies (i.e., do I have the thing I need to query or not) efficiently 2) expressing those things at the right granularity or even overlapping granularity 3) updating those things efficiently#2020-02-1913:46favilaa datomic peer can be really sloppy with this by just having lots of ram and a fast network and very large granularity (i.e. “giant seqs of sorted datoms”#2020-02-1913:47favilaon a remote, semi-untrusted client, you can’t do that#2020-02-1913:47favilayou need to send a lot less, and you need to make sure they can’t see “nearby” data which may not be theirs#2020-02-1914:57lilactownyeah that makes sense#2020-02-1915:00lilactownI guess the biggest downside of datalog for this use case is that it’s much harder to build an index on top of a set of queries you’re sending.#2020-02-1823:27Joe LaneSo, at that point you're putting your database on the internet.#2020-02-1823:30Joe Lane(If i'm understanding you correctly)#2020-02-1823:36lilactownI’m not sure what you mean by that.#2020-02-1823:52Joe LaneWhat is the first problem (the a la GraphQL) you're trying to solve? Why are you interested in building the datalog api? What brought you to the conclusion of "use datascript as a client-side cache"? 
What are you interested in caching?#2020-02-1900:13Joe LaneFWIW I think the most complete library that is close to what you're asking for is https://github.com/denistakeda/re-posh#2020-02-1900:14Joe LaneAnd all the libraries it depends on.#2020-02-1900:14lilactownwe have a lot of microservices that are currently exposed at various endpoints. I would like to be be able to query our system, via Datalog, to get a response that contains data from multiple endpoints.
E.g. if there’s a books and authors microservice, I’d be able to write a query on the client-side:
'[:find ?title ?author
:where
[?e :book/title ?title]
[?e :book/author ?author-id]
[?author-id :author/name ?author]]
and the service would query across the books and authors microservices to resolve the facts I want#2020-02-1900:15Joe LaneAhh, pathom is probably the closest thing to that 🙂#2020-02-1900:15lilactownright, I know of pathom but it isn't datalog, it's more akin to pull syntax#2020-02-1900:15Joe LaneHa, that's what I was just typing.#2020-02-1900:18Joe LaneIf you're going to make something to do this, I'd probably build it on top of pathom since it compiles indexes.#2020-02-1900:18Joe LaneI'm not familiar with anything that will do it for you.#2020-02-1900:20Joe LaneDatascript is (obviously) datalog implemented in the browser. Another interesting one is built into https://github.com/arachne-framework/factui , which builds an impressive datalog on top of clara rules (in the browser!).#2020-02-1900:22Joe LaneTo be clear, what we are talking about now is pretty far away from the initial question of:
> has anyone used datascript as a client-side cache for datomic?
Nothing wrong with that, per se, it just sounds like datalog to query across n-services is a very different (more general) problem than client-side datomic cache.#2020-02-1900:26lilactownyes, the next step of the idea is that I would like to handle caching on the client of these queries so that it doesn’t have to send the request for datums that have already been requested from the microservices.#2020-02-1900:27lilactownand my thinking was: what if my datalog-query-service responded with all of the datums that were requested in order to resolve the query, and then the client transacted those to a local datascript db?#2020-02-1901:07lilactownDoes my question make more sense, now?#2020-02-1901:08lilactownI am looking for experience reports of using datascript to cache queries for a service that already uses datalog (datomic)#2020-02-1906:22pithylessThe closest thing I'm aware of is https://github.com/replikativ/datahike and the replikativ stack as an alternative trying to build a JS/JVM distributed data stack. But, I would also argue that your books and authors example sounds like you want to build a js-based datomic peer (that will fetch and cache datoms and do joins locally):
'[:find ?title ?author
:in $book-db $author-db
:where
[$book-db ?e :book/title ?title]
[$book-db ?e :book/author ?author-id]
[$author-db ?a :author/id ?author-id]
[$author-db ?a :author/name ?author]]
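On invented in-memory data, the cross-source join this two-database query expresses looks like this in plain Clojure (the book/author maps are sample data standing in for the two databases):

```clojure
(def book-db   [{:book/title "SICP" :book/author 42}])
(def author-db [{:author/id 42 :author/name "Abelson"}])

;; Join books to authors on the shared author id, yielding
;; [title name] tuples like the :find clause above.
(for [b book-db
      a author-db
      :when (= (:book/author b) (:author/id a))]
  [(:book/title b) (:author/name a)])
;; => (["SICP" "Abelson"])
```

This nested-loop form also makes the cost visible: without an index on the join attribute, a client-side peer would scan the cross product of both sources.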
#2020-02-1914:56lilactowna client with caching is fairly similar to a datomic peer, I suppose!#2020-02-1913:08asierHi, we have a memory issue (system crashes) because we get many records (over a million) and then sort by time.#2020-02-1913:09asierThis is the schema:#2020-02-1913:09asier{:db/id #db/id[:db.part/db]
:db/ident :lock/activities
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true
:db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
:db/ident :activity/author
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id[:db.part/db]
:db/ident :activity/at
:db/valueType :db.type/instant
:db/cardinality :db.cardinality/one
:db/index true
:db.install/_attribute :db.part/db}#2020-02-1913:09asierand the code that crashes is this:#2020-02-1913:09asier(sort-by :at
(mapv #(get-activity-data %) (:lock/activities lock)))
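One way to keep memory bounded here, sketched on invented data: sort only a lightweight [timestamp id] pair per activity, keep the newest N ids, and realize the expensive per-activity data afterwards. `newest-n` is a made-up helper:

```clojure
(defn newest-n
  "Returns the ids of the n newest items, given [timestamp id] pairs."
  [n pairs]
  (->> pairs
       (sort-by first #(compare %2 %1)) ; descending by timestamp
       (take n)
       (mapv second)))

(newest-n 2 [[10 :a] [30 :c] [20 :b] [40 :d]])
;; => [:d :c]
```

Only the small pairs are held during the sort; the full entity maps are fetched for just the surviving ids.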
#2020-02-1913:10asierIs there a simpler way to get activities of a lock sorted?#2020-02-1913:12Joe Lane@asier does it crash if you just call (mapv #(get-activity-data %) (:lock/activities lock)) without sorting?#2020-02-1913:13asieryes#2020-02-1913:14Joe LaneSo, the issue isn't sorting then, maybe the issue is eagerly realizing a million+ entities in memory at once?#2020-02-1913:14favilaDo you need all results here? It seems like you simply can’t fit all activities in memory; what are you willing to give up?#2020-02-1913:15asierwe just need the newest 100 activities, but we don't know how to do it.#2020-02-1913:16Joe LaneCan you use the :avet index?#2020-02-1913:16favilaWould a time bound instead of a count bound be acceptable?#2020-02-1913:18favilaIf not, I think you need to rearrange your schema a bit so you can make a composite attr sorted how you want#2020-02-1913:20asierThanks both - I'll investigate further#2020-02-1913:20favila(Or, could you throw more ram at it)#2020-02-1913:21asierthat's an option, indeed#2020-02-1914:25asierWith this code we don't need to increase the memory:#2020-02-1914:26asier(sort #(compare (:activity/at %2)
(:activity/at %1))
(:lock/activities (or (d/entity db [:lock/id lock-id])
(d/entity db [:lock/serial-number lock-id]))))#2020-02-1914:32favilahow is this different from your get-activity-data? (Which I now realize you never showed us)#2020-02-1914:43asier(defn get-activity-data
"Gets the attributes from entity"
[activity]
(merge
{:at (to-long (:activity/at activity))
:name (-> activity
:activity/kind
:activity-kind/name)
:desc (-> activity
:activity/kind
:activity-kind/desc)
:status (-> activity
:activity/kind
:activity-kind/status)
:image (->> activity
:activity/kind
:activity-kind/image
(str "assets/"))}
(if (:activity/author activity)
{:author (some-> activity :activity/author :user/username)}
{:author nil})))#2020-02-1916:08favilaAh, I see, you were building full result sets, and you couldn’t fit that in memory. but you can fit the things you sort by#2020-02-1916:08favilaconsider using a query instead of the entity API?#2020-02-1916:11favilae.g.
(->> (d/q '[:find ?activity ?at
:in $ ?lock
:where
[?lock :lock/activities ?activity]
[?activity :activity/at ?at]]
db lock-eid)
(sort-by peek #(compare %2 %1))
(into []
(comp
(map first)
(take 100))))#2020-02-1916:13favilastill realizes everything you sort on, but should be lighter than using entity sets#2020-02-2013:03asierI take note - thanks!#2020-02-1914:46asierold code - from 2015 or so#2020-02-1916:00BrianHello! I'm trying to upgrade my system for the first time. I'm running a solo topology in Datomic Cloud. I've selected my root stack and used the formation template https://s3.amazonaws.com/datomic-cloud-1/cft/589-8846/datomic-storage-589-8846.json which I found from https://docs.datomic.com/cloud/releases.html however the stack update has failed with the status reason of Export with name XXX-MountTargetSecurityGroup is already exported by stack XXX-StorageXXX-XXX . I have never updated my system since I created it in August. Any guidance would be appreciated!#2020-02-1916:07Joe LaneHave you looked at https://docs.datomic.com/cloud/operation/upgrading.html and https://docs.datomic.com/cloud/operation/split-stacks.html ?#2020-02-1916:12BrianI was using the upgrading.html page but I was not using a split stack system. Let me try splitting the stacks and see if the problem persists after that. Thanks @lanejo01#2020-02-1916:58BrianWhere should I go from here? My system is now down with the ec2 instance terminated so some part of that delete worked#2020-02-1917:00Joe LaneI'd open a ticket at this point, sorry I can't be more helpful ATM.#2020-02-1917:00BrianNo problem. Thanks for the help!#2020-02-1917:55BrianI resolved the above issue by navigating to the ENIs page and deleting the ENIs manually#2020-02-1918:14Joe Lane@brian.rogers Did you then split the stack and upgrade?#2020-02-1918:14BrianCurrently in the process of doing so!#2020-02-1918:14BrianHave not yet finished#2020-02-1918:14Joe LaneGreat to hear :+1:#2020-02-1921:36marshallANN:
Datomic CLI Tools 0.10.81 now available:
https://forum.datomic.com/t/datomic-cli-0-10-81-now-available/1363
Check out the video overview: https://docs.datomic.com/cloud/livetutorial/clitools.html#2020-02-2006:42tatutdoes it need some configuration? I'm getting Error building classpath. Could not find artifact com.datomic:tools.ops:jar:0.10.81 in central () when trying to run datomic command#2020-02-2008:14maxtI get the same#2020-02-2008:18maxtIf I add the datomic cloud s3 repo to deps I can get it to work
:mvn/repos {"datomic-cloud" {:url ""}}#2020-02-2015:20marshallIt should have been on Maven central; we are looking again - it should show up there soon if we need to re-release#2020-02-2016:59timcreasyLooks like it’s up now :+1: I had been hitting this same issue.#2020-02-2008:10maxt@marshall Thank you! I especially like the log list command. Would those commands also be available to call from a repl? That would be my prefered way of working. Then I don't need to have another window open, I can save some startup time, and I don't have to parse text to process the output further.#2020-02-2015:19maxtTurns out using it from the REPL works great
(require 'datomic.tools.ops)
(datomic.tools.ops.cloud/list-systems {})
(datomic.tools.ops.system/list-instances {:system "example"})
(datomic.tools.ops.log/events {:group "example"
:minutes-back 10
:tod (java.util.Date.)})
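For context, maxt's `:mvn/repos` workaround from earlier in the thread would sit in deps.edn roughly like this. This is a sketch; the repo URL was elided in the original message and is left elided here (see the Datomic release documentation for the actual S3 address), and per marshall's follow-up the artifact should normally resolve from Maven Central, so the extra repo is only a fallback:

```clojure
;; deps.edn (sketch). The "datomic-cloud" repo URL is elided, as in the
;; original message; normally the artifact resolves from Maven Central.
{:mvn/repos {"datomic-cloud" {:url "..."}}
 :aliases
 {:datomic-cli
  {:extra-deps {com.datomic/tools.ops {:mvn/version "0.10.81"}}}}}
```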
#2020-02-2011:28pezSomeone else tried to deploy on-prem in AWS eu-north-1? I get an error Not a supported dynamodb region: eu-north-1 - (You'll never learn).#2020-02-2012:43marshallThe included launch scripts dont currently support that region. You can likely provision and launch manually there. I will also look into adding support for the region in an upcoming release#2020-02-2012:50pezIn our case we get the error message when the peer is trying to connect, so this seems to go deeper than the launch script.#2020-02-2014:18marshallah; i think i know what the issue is; one minute#2020-02-2014:25marshallI believe I have a workaround that will work for you.
In your transactor properties file, change the protocol to ddb-local:
protocol=ddb-local
Then comment out the aws-dynamodb-region line:
#aws-dynamodb-region=
Finally, set the aws-dynamodb-override-endpoint to the address of the DDB endpoint:
aws-dynamodb-override-endpoint=
The use of ddb-local as the protocol will allow the system to honor the override configuration.
Similarly, you will need to use the ddb-local URI for your peer:
#2020-02-2014:31pezThanks a lot! Right now we're moving things back to eu-central-1, but we might try this again later this week. Will let you know if we do and how we fare, if so.#2020-02-2014:32marshallI will also add a feature request for region support for that region#2020-02-2014:33pezThat would certainly help us. We will probably stick with on-prem on AWS for a while, and we serve only Sweden.#2020-02-2014:48marshallhttps://feedback.eu.pendo.io/app/#/case/117834#2020-02-2015:20marshallWe don’t currently support running the tools from a REPL; caveat emptor so to speak#2020-02-2016:21uwoWhen overriding the default table name in a metaschema, is that name then used to match with Datomic attributes and determine the columns for the associated table?
(https://docs.datomic.com/cloud/analytics/analytics-metaschema.html#name-option)#2020-02-2022:09uwojust to follow up, the answer looks to be no. As in the name-opt isn't used to match attributes whose munged namespace would match. Need to use :include to capture them. (let me know if I'm missing something!)#2020-02-2016:42mdhaneyI’m trying to automate my Ion deployments with a Github workflow, but I can’t figure out how to handle polling for the deployment status. I was wondering if anyone else has done this, or even with a different CI tool how you handled the polling.#2020-02-2017:22maxtI'm doing it on circle CI. This is my deploy function
;; Inspired by
(defn ions-release
"Do push and deploy of app. Supports stable and unstable releases. Returns when deploy finishes running."
[{:keys [group] :as args}]
(try
(let [push (requiring-resolve 'datomic.ion.dev/push)
deploy (requiring-resolve 'datomic.ion.dev/deploy)
deploy-status (requiring-resolve 'datomic.ion.dev/deploy-status)]
(println "Pushing" args)
(let [{:keys [dependency-conflicts deploy-groups] :as push-data} (push args)]
(assert (contains? (set deploy-groups) group) (str "Group " group " must be one of " deploy-groups))
(let [delay-between-retries 1000
deploy-args (merge (select-keys args [:creds-profile :region :uname :group])
(select-keys push-data [:rev]))
_ (println "Deploying" deploy-args)
deploy-data (deploy deploy-args)
deploy-status-args (merge (select-keys args [:creds-profile :region])
(select-keys deploy-data [:execution-arn]))]
(when dependency-conflicts
(clojure.pprint/pprint dependency-conflicts))
(println "Waiting for deploy" deploy-status-args)
(loop []
(let [status-data (deploy-status deploy-status-args)]
(if (= "RUNNING" (:code-deploy-status status-data))
(do
(print ".")
(flush)
(Thread/sleep delay-between-retries) (recur))
(do (println)
status-data)))))))
(catch Exception e
{:deploy-status "ERROR"
:message (.getMessage e)})))
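A hypothetical way to wire the function above into a CI step; the `:group`, `:region`, and `:uname` values are placeholders, and this assumes `ions-release` as defined above is on the classpath:

```clojure
;; Hypothetical CI entry point calling ions-release from the snippet above.
;; :group and :uname are placeholders for your own deploy group and
;; unstable-release name.
(defn -main [& _]
  (let [{:keys [deploy-status] :as result}
        (ions-release {:group  "my-query-group"
                       :region "us-east-1"
                       :uname  "ci"})]
    (println result)
    ;; Fail the CI job unless CodeDeploy reported success.
    (when-not (= "SUCCEEDED" deploy-status)
      (System/exit 1))))
```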
#2020-02-2016:58NicoI have an attribute that has a cardinality of many, how do I test in a query that all entries in it aren't equal to a certain thing?#2020-02-2016:59NicoI can do [?e :tags ?t] [(not= :tags [whatever])] but that returns items that have the tag I don't want, because they also have other tags#2020-02-2017:08NicoI just realised not clauses were a thing, but that still doesn't completely solve the problem#2020-02-2017:09favilaIf you are using on-prem, call a function that checks; this is very tedious in pure datalog#2020-02-2017:11favilaa pure datalog solution will be a variation of this: https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input/43808266#43808266#2020-02-2017:12Nicoah ok, thanks#2020-02-2017:16favilaAn on-prem function implementation looks something like this:
(defn not-any-matching-eav? [db e a test-v]
(zero? (->> (datomic.api/datoms db :eavt e a test-v)
(bounded-count 1))))#2020-02-2019:11shaunxcodeis there any way with the pull syntax to indicate you only want the single value (v.s. a collection containing one value). With query we can do [:find ?x . :in ....] is there something similar (only thing I could find in docs is ability to indicate limit). e.g. what about the case where you know there is only one term e.g. [:person/id [{:person/_child [:person/id]} :as :person/parent]] and I do not want :person/parent to be "boxed"? Like I would like the result to be [{:person/id :x :person/parent {:person/id :y}}] not [{:person/id :x :person/parent [{:person/id :y}]}]#2020-02-2023:17csmI’m thinking of using a tuple of two instants to represent a validity period, and a query would look something like [:find ?cap :in $ :where [?cap :capability/period ?period] [(ground (java.util.Date.)) ?now] [(first ?period) ?starts] [(second ?period) ?ends] [(< ?starts ?now)] [(< ?now ?ends)]]. Is that an appropriate use of tuples? Is first and second the right way to pull those values out?#2020-02-2100:21shaunxcodeyou might consider [... :where ... [(untuple ?period) [?starts ?ends]] ...]#2020-02-2101:10csmthat’s exactly what I wanted… the docs for untuple confused me#2020-02-2110:28tatutI'm trying to run tests in github actions (just clojure -A:test command) and it seems it doesn't find datomic jars (I have the s3 releases bucket as a repo)#2020-02-2110:30tatutspecifically the com.datomic/ion {:mvn/version "0.9.35"} can't be found#2020-02-2113:45maxtMy first guess would be missing aws credentials. You need to be authed to fetch from that repo.#2020-02-2113:59tatutI don't think our codebuild in aws is authed, it's just in the n. virginia region#2020-02-2117:10m0smithIn Datomic cloud, is it possible to do a dry run of a retractEntity. 
That is, have it return what it would retract but not actually do the retraction?#2020-02-2117:28Joe Lane@m0smith use https://docs.datomic.com/client-api/datomic.client.api.html#var-with and with-db#2020-02-2117:33m0smithThanks @lanejo01! That did the trick#2020-02-2118:15jarethttps://forum.datomic.com/t/datomic-cloud-616-8879/1364#2020-02-2123:03dvingodatomic cloud question: We are attempting to set up a new cloudformation stack of an existing system (existing ions code, datomic schema, and data)
The stack is setup, schema is transacted and we're attempting to confirm that data is loaded. When executing a query via HTTP through API gateway an exception is thrown. It appears to happen on the invocation to datomic.client.api/connect.
The relevant stack trace lines are:
[
"datomic.anomalies$throw_if_anom",
"invoke",
"anomalies.clj",
113
],
[
"datomic.client.impl.local.Client",
"connect",
"local.clj",
192
],
[
"datomic.client.api$connect",
"invokeStatic",
"api.clj",
133
],
and
"Cause": "Supplied AttributeValue is empty, must contain exactly one of the supported datatypes (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: JF41UQTURAKQ0ITO8LFP94LBEVVV4KQNSO5AEMVJF66Q9)"
}
}
},
"At": [
"datomic.anomalies$throw_if_anom",
"invokeStatic",
"anomalies.clj",
119
]
The config map supplied to connect is read from Amazon SSM parameters:
(defn cloud-db-config []
{:server-type :ion
:region (ssm/get-ssm-param "region")
:system (ssm/get-ssm-param "system")
:endpoint (ssm/get-db-endpoint)
:timeout (ssm/get-ssm-param "timeout")
:proxy-port 8182})
I believe these values are all present in SSM (I have to debug through an ops team member due to restricted environment access... so taking their word for it)
We confirmed that we can query the database from a REPL connection by passing a manually constructed db-config map.
I'm wondering if anyone has seen something like this before or if there is something obvious that I'm overlooking.#2020-02-2200:05dvingosooooo. redeploying the ion app seems to have magically fixed things#2020-02-2217:32cobyHey folks, Datomic n00b here. I'm working inside a simple demo app generated with lein new luminus luminus-example +datomic. (It generated the db-core ns called below, where my conn lives.) I can run queries just fine but can't transact. When I call this code:
(defn create-post! [{:keys [type title slug content]}]
(let [id (java.util.UUID/randomUUID)
type (or type :page)
title (or title (str "Page " id))
slug (or slug (title->slug title))
content (or content [])]
(d/transact db-core/conn {:tx-data [{:post/id id
:post/type type
:post/slug slug
:post/title title
:post/content content}
{:db/add "datomic.tx"
:db/doc "create post"}]})))
(create-post! {})
I get:
Execution error (ClassCastException) at datomic.api/transact (api.clj:96).
class clojure.lang.PersistentArrayMap cannot be cast to class java.util.List (clojure.lang.PersistentArrayMap is in unnamed module of loader 'app'; java.util.List is in module java.base of loader 'bootstrap')
Am I doing something obviously wrong? Which map is it trying to cast to a list?#2020-02-2217:53markaddlemanAre you using Datomic Cloud or on-prem? I believe the APIs for those two are a bit different.
Specifically, the second argument to transact is a map in Cloud but I do not think it is in on-prem#2020-02-2219:23cobyI'm using on-prem, following this tutorial which is passing a map:
https://docs.datomic.com/on-prem/tutorial.html#2020-02-2223:05ghadiYou should write a function that outputs the tx-data so that you can inspect it before transacting#2020-02-2223:05ghadi try to pull apart things that transact from things that generate tx data#2020-02-2305:25cobyI actually did try that and got what I expected.
(create-post-data {})
;; => [#:post{:id #uuid "b05f2baa-ec01-485c-91cc-abf2f8fe5256",
;; :type :page,
;; :slug "page-b05f2baa-ec01-485c-91cc-abf2f8fe5256",
;; :title "Page b05f2baa-ec01-485c-91cc-abf2f8fe5256",
;; :content []}
;; #:db{:add "datomic.tx", :doc "create post"}]
(create-post-data {:type :page :title "New Page!"})
;; => [#:post{:id #uuid "5665a149-90cf-42ff-bef1-a10d46881a3e",
;; :type :page,
;; :slug "new-page!",
;; :title "New Page!",
;; :content []}
;; #:db{:add "datomic.tx", :doc "create post"}]
(create-post-data {:type :page :title "New Page!" :slug "new-page" :content ["some" "content"]})
;; => [#:post{:id #uuid "82a510c7-8160-4e22-96fd-74d15239ef8b",
;; :type :page,
;; :slug "new-page",
;; :title "New Page!",
;; :content ["some" "content"]}
;; #:db{:add "datomic.tx", :doc "create post"}]#2020-02-2305:43cobyHmm, this seems to be a deeper issue that's not related to create-post! at all. I'm getting it for any call to transact.
(d/transact db-core/conn
{:tx-data
[[:db/add
[:post/id #uuid "b2e32ece-c6b6-4936-ba39-d24f717dcd4d"]
:post/slug "updated-slug"]]})
;; => Execution error (ClassCastException) at datomic.api/transact (api.clj:96).
;; class clojure.lang.PersistentArrayMap cannot be cast to class java.util.List (clojure.lang.PersistentArrayMap is in unnamed module of loader 'app'; java.util.List is in module java.base of loader 'bootstrap')#2020-02-2312:09andrewzhurovdoes this hit the spot?
https://stackoverflow.com/a/53360679#2020-02-2318:14cobyyep, that sounds like exactly my issue. So here's my understanding:
• running bin/run -m datomic.peer-server ... and connecting per the tutorial uses the Client API (map version)
• My Luminus app is using the "in-process peer library" as described here: https://docs.datomic.com/on-prem/clients-and-peers.html
• Seems like the recommendation is to go with the Client API...which is still in alpha??#2020-02-2219:17currentoorHas anyone used datascript as a write-through-cache for datomic to achieve offline mode?
I’m building a POS, clients are react native iOS. Datascript (persisted in client-side blob store) as a write-through cache seems compelling because then I can write most of my queries in CLJC and re-use them on client and server. And in offline mode I can restrict the POS to only allow accretion of new data, so I don’t have to deal with DB conflicts.#2020-02-2316:20Joe Lane@currentoor From a few days ago, https://clojurians.slack.com/archives/C03RZMDSH/p1582066433438000#2020-02-2316:25currentoorAh thanks, exactly the advice I was hoping for#2020-02-2320:23emTrying to puzzle out not-join and when it is necessary. The docs example is:
[:find (count ?artist)
:where [?artist :artist/name]
(not-join [?artist]
[?release :release/artists ?artist]
[?release :release/year 1970])]
The docs mention this means that ?artist is the only variable unified and the other inside the not-join, namely ?release, is not. Why would a simple (not) clause here not work, if there doesn’t seem to be a ?release to be unified outside the clause anyway? Or is this just some kind of performance thing?#2020-02-2320:25joshkhif i'm reading the docs correctly, one needs to split their solo stack in order to upgrade datomic. are there any disadvantages to doing this (e.g. pricing)? and if not, then why does the solo topology start with a combined stack?#2020-02-2320:44em@U0GC1C09L The master combined stack is necessary for the AWS Marketplace integration, but makes operational tasks harder for Datomic, as architecturally the persistent storage stack and the compute nodes are separate. For example, you could take down the compute nodes completely and your underlying storage would be unaffected, allowing you to upgrade either separately or do whatever you want. For solo the system is so small (and has no HA guarantees) so it’s fine to lump everything together, but for production you’ll need to split the stacks (or more correctly “untangle” them because they weren’t meant to be together anyway). There’s no pricing disadvantage to split stacks, the same resources are just described in two cloudformation templates.#2020-02-2320:49joshkh> The master combined stack is necessary for the AWS Marketplace integration
cool, that answers my question. i'm coming from the production->solo perspective for hobby purposes. thanks @UNRDXKBNY#2020-02-2404:33mvIs there a good tutorial or guide I can read for advice on which libraries to choose to make a datomic backed web api?#2020-02-2407:42dmarjenburghAfter ionizing your request-handler, it’s not really different from building a web api with ring/pedestal.#2020-02-2407:44dmarjenburghThis was assuming datomic-cloud…#2020-02-2408:44mkvlrI recently came across https://blog.acolyer.org/2019/06/17/towards-multiverse-databases/ and found it quite interesting. The central idea behind multiverse databases is to push the data access and privacy rules into the database itself. The database takes on responsibility for authorization and transformation, and the application retains responsibility only for authentication and correct delegation of the authenticated principal on a database call. Such a design rules out an entire class of application errors, protecting private data from accidentally leaking. Is anybody aware of a similar thing being tried for datomic?#2020-02-2408:48mkvlr@U4YGF4NGM and @U09R86PA4: I think it should also solve the security issues when trying to cache parts of datomic locally?#2020-02-2420:39cobyWow, fascinating! I'd guess that probably no one's working on this (would love to be wrong!), but I imagine given the degree of immutability and lazy eval built into Datomic already, it'd have a leg up on any other database software trying to add this in. I could imagine the API being as simple as:
(def db (d/with-context {:user-id uid}))#2020-02-2413:20Arek FlinikHas anybody tried running Datomic on top of MSSQL? (please don’t blame me, talking with a potential enterprise customer that insists on doing that because “based on their experiences PostgreSQL is a terrible choice” 🙄)#2020-02-2416:48favilaI’ve run it on top of mysql. You’ll be fine#2020-02-2416:49faviladatomic uses sql as a key-value store for binary blobs#2020-02-2416:49favilaoptimize your tables and schema for that workload#2020-02-2414:48marshall@aflinik Many of our customers run on SQL Server#2020-02-2420:36Arek FlinikThanks! Would be able to share some insights about potential pitfalls, differences in performance characteristics, or any other learnings?#2020-02-2515:03marshallGenerally speaking most SQL stores are fairly reliable
We have many customers using postgres, SQL Server, and Oracle
Like most Datomic storage options, the most common issues are usually general misconfiguration of storage itself. If you’re comfortable running the storage and/or have a good DBA who knows it well, they all perform fairly similarly#2020-02-2417:59arohnerAre in-memory databases still supported for testing purposes? I’m having trouble finding docs on how to set that up#2020-02-2418:32favilaIf you’re talking about on-prem, yes definitely. If you’re talking about cloud AFAIK it has never supported in-memory? (why say “still supported”?)#2020-02-2418:11arohnerIn production I plan to use datomic client against an on-prem transactor. What is the best way to write tests against that?#2020-02-2418:12arohnerTo get the in-memory database it looks like the app needs to include the full datomic-pro dependency#2020-02-2419:10favilaCognitect’s official recommendation is “use random database names and an aws connection for testing”#2020-02-2419:10favilaThis thing also exists, but requires datomic-pro as you noticed: https://github.com/ComputeSoftware/datomic-client-memdb#2020-02-2514:49joshkhmy Ions deployments to a specific query group started failing today due to an error reported by the BeforeInstall script: There is insufficient memory for the Java Runtime Environment to continue. has anyone else experienced this?#2020-02-2514:50joshkhthe project deployed to the query group has no problem running queries. it's the deployment itself that fails.#2020-02-2515:02marshall@joshkh what size instance is the query group?#2020-02-2515:13joshkht3.medium#2020-02-2515:14marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#jvm-settings
the t medium instance only has 2582m heap#2020-02-2515:15joshkhi did start playing with Datomic Analytics yesterday, although i'm using my main compute group for that. still, could that affect an unrelated query group?#2020-02-2515:16marshallshouldnt
the analytics server itself runs on the gateway instance and sends queries to whatever group you’ve configured (or default to the primary compute)#2020-02-2515:17joshkhright, that's what i thought. okay, we can look into increasing our instance size. thanks @marshall.#2020-02-2515:18joshkhjust curious though - wouldn't the heap have more of an effect on a running project? this happens when i initiate a deployment, which fails almost immediately.
LifecycleEvent - BeforeInstall
Script - scripts/install-clojure
[stdout]Clojure 1.10.0.414 already installed
Script - sync-libs
[stderr]OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ee000000, 32505856, 0) failed; error='Cannot allocate memory' (errno=12)
[stdout]#
[stdout]# There is insufficient memory for the Java Runtime Environment to continue.
[stdout]# Native memory allocation (mmap) failed to map 32505856 bytes for committing reserved memory.
#2020-02-2515:18marshallhave you tried cycling the instance?#2020-02-2515:19marshallthat looks like a wedged instance#2020-02-2515:19marshallif it can’t allocate 32M#2020-02-2515:22joshkhwe tried autoscaling a second instance which came up just fine. then we tried to redeploy or code to fix the wedged instance, but the deployment failed due to a 120s sync libs error#2020-02-2515:23marshallwhat version are you running?#2020-02-2515:23joshkhoops, incoming edit 😉 above#2020-02-2515:24marshallyou should update your ion-dev version https://docs.datomic.com/cloud/releases.html#ion-dev-251#2020-02-2515:24joshkhyes, i was very excited to see that!#2020-02-2515:24marshallthen cycle your instance(s)#2020-02-2515:25joshkhwill do, thanks Marshall#2020-02-2516:24kennyI am getting this exception ~1/day. Any idea why this would occur?
clojure.lang.ExceptionInfo: Datomic Client Exception
{:cognitect.anomalies/category :cognitect.anomalies/fault, :http-result {:status 500, :headers {"content-length" "32", "server" "Jetty(9.4.24.v20191120)", "date" "Sun, 23 Feb 2020 17:08:37 GMT", "content-type" "application/edn"}, :body nil}}
at datomic.client.api.async$ares.invokeStatic (async.clj:58)
datomic.client.api.async$ares.invoke (async.clj:54)
datomic.client.api.sync.Client.list_databases (sync.clj:71)
datomic.client.api$list_databases.invokeStatic (api.clj:112)
datomic.client.api$list_databases.invoke (api.clj:106)
compute.db.core.DatomicClient.list_databases (core.cljc:71)
datomic.client.api$list_databases.invokeStatic (api.clj:112)
datomic.client.api$list_databases.invoke (api.clj:106)#2020-02-2516:25ghadiis that the full stacktrace? what was the user code that caused it?#2020-02-2516:26kennyNo but it's the only relevant part. It's caused by datomic.client.api$list_databases#2020-02-2516:27kennyThis is line 71 in compute.db.core:
(let [dbs (d/list-databases client arg-map)]
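Since the fault above is intermittent, one common pattern is to retry calls that fail with a transient anomaly. A sketch, assuming (as the stack trace here shows) that the client throws an `ex-info` whose `ex-data` carries `:cognitect.anomalies/category`:

```clojure
;; Sketch: retry a thunk when it throws a transient :cognitect.anomalies/fault,
;; rethrowing any other exception (or the last fault after n attempts).
(defn retrying
  [f n]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt n)
                              (= :cognitect.anomalies/fault
                                 (:cognitect.anomalies/category (ex-data e))))
                       {:retry true}
                       (throw e))))]
      (if (:retry result)
        (recur (inc attempt))
        (:ok result)))))

;; e.g. (retrying #(d/list-databases client {}) 3)
```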
#2020-02-2516:29ghadinot sure, but you should try to correlate it with logs in cloudwatch#2020-02-2516:30ghadihttps://docs.datomic.com/cloud/operation/cli-tools.html#log#2020-02-2516:30ghadiBTW ^ new Datomic CLI tools#2020-02-2516:35kennyIt looks nice but will require a couple things to happen before we can update.#2020-02-2516:39ghadiyou appear to be on the latest 616 compute#2020-02-2516:40ghadiyou can use the datomic cli tools fine with that#2020-02-2516:36kennyDon't use CW logs often. It felt like a battle to get to the logs I wanted 😵 Should I upload them here? There's 2 relevant lines.#2020-02-2516:37kenny#2020-02-2516:37kenny#2020-02-2516:37kennyDidn't look like any sensitive info so added them to the thread there ^#2020-02-2516:38ghadithanks -- that is probably useful to @marshall. Need your datomic compute stack version # too#2020-02-2516:38ghadithanks -- that is probably useful to @marshall. Need your datomic compute stack version # too#2020-02-2516:40kennyDatomicCFTVersion: 616
DatomicCloudVersion: 8879#2020-02-2516:40ghadithanks#2020-02-2516:38ghadiseems like a server side bug from the stacktrace#2020-02-2516:39kennyIt's weird how it happens so infrequently.#2020-02-2518:08jaretKenny, I am logging a case to capture this so we can look at it. I think we have everything we need, but wanted to let you know in case you see an e-mail come your way from support.#2020-02-2518:13kennyGreat, thanks. #2020-02-2516:47uwoI'm setting Xmx and Xms when running the (on-prem) peer-server. I just noticed that it appears to start with its own settings for those flags.
CGroup: /system.slice/datomic.service
├─28220 /bin/bash /var/lib/datomic/runtime/bin/run -Xmx4g -Xms4g ...
└─28235 java -server -Xmx1g -Xms1g -Xmx4g -Xms4g -cp ...
Should I be setting those values thru configuration elsewhere?#2020-02-2517:02uwoAh, I see where they're hard coded into the bin/run script. Perhaps I don't need to treat the peer server like other app peers?Repeated flag precedence may differ across versions/distros#2020-02-2521:52hadilsI am developing an application that reqquires storage of millions of transactions and I am concerned about the limitations of Datomic Cloud. Should I use a separate store (e.g. DynamoDB) for the transactions or are there ways to scale Datomic Cloud? Welcome any feedback anyone might have...#2020-02-2522:14Joe Lane@hadilsabbagh18 Which limitations are you concerned about? What do you mean "Ways to scale Datomic Cloud?" ?#2020-02-2522:26hadilsI am concerned about storage for now. #2020-02-2522:27hadils@lanejo01 #2020-02-2522:27ghadi@hadilsabbagh18 millions of transactions is fine with Datomic Cloud. (Disclaimer: I work for Cognitect, but not on Datomic) If you really want to do it right, you'll want to estimate the cardinality of your entities, relationships, and estimate the frequency of change... all in all you need to provide more specifics#2020-02-2522:28ghadiWith the disclaimer that there is no application specifics, DynamoDB has very very rudimentary query power#2020-02-2522:38hadils@ghadi -- I will make an estimate. Who should I be talking to about this?#2020-02-2522:39ghadi@marshall is a good person to talk to#2020-02-2522:39hadils@ghadi Thanks a lot. I will be more specific when talking to @marshall.#2020-02-2522:39ghadino problem#2020-02-2523:42steveb8nQ: has anyone used aws x-ray inside Ions? I want to do this (i.e. add sub-segments) but I’m wary of memory leaks etc when using the aws client in Ions. Any war stories or success stories?#2020-02-2523:49Sam DeSotaIs it possible to pass a temp id to a tx-fn and resolve the referenced entity in the tx-fn? Example:
(defn my-tx-inc [db ent attr]
  (let [value (d/pull db {:eid ent :selector [:db/id attr]})]
    [[:db/add (:db/id value) attr (inc (attr value))]]))
{:tx-data [{:db/id "tempid" :product/sku 1} '(my-ns/my-tx-inc "tempid" :product/likes)]}#2020-02-2523:52favilaNo. You must treat tempids as opaque. What they resolve to is unknowable until all datoms are expanded#2020-02-2523:53Sam DeSotaGot it. Thank you.#2020-02-2523:53favilaFor example some other tx fn may return something that asserts the tempid has a particular upserting attribute. That changes how it would resolve #2020-02-2523:50Sam DeSotaAs is, this doesn't seem to work#2020-02-2523:50ghadi@steveb8n I’ve done them#2020-02-2523:51ghadiYou need the xray sdk for Java but not the aws-sdk for java auto instrumenter#2020-02-2523:52steveb8ngreat. not the Cognitect aws client? Just aws java interop?#2020-02-2523:52ghadiInterop#2020-02-2523:53ghadiKeep in mind the amazon xray sdk for java is a completely separate sdk than the aws java sdk (not a subset)#2020-02-2523:53steveb8nok. I’ll give that a try. is there a sample snippet out there somewhere? I don’t need one but it seems like a good thing for docs#2020-02-2523:54ghadiNo, sorry, but the aws docs were accurate#2020-02-2523:54ghadiAnd helpful#2020-02-2523:54steveb8nok, good info. thanks#2020-02-2600:19csmone can (and should) split stacks in a solo cloud install, correct? And recreate the storage stack and solo compute stack?#2020-02-2612:11vemvHi. I use datomic on-prem (via the Peer API) with a number of Datomic installations (think production, staging etc)
In most of them everything is OK, but for one of them, query latency is consistently high; 1500ms, for the simplest, cheapest possible query.
Indices are in place. We also tried to rule out basic stuff (configuration/environment differences, etc)
What are some possible causes, or things we can do to troubleshoot this?#2020-02-2612:12vemvIn case it helps:
how come the latency is high every time?
Given the Datomic architecture (AIUI), queries should be in-memory, with updates being pushed over the wire.
So after a slow first query, subsequent queries should be fast... but nope.#2020-02-2615:21marshall@vemv what is different about the one that is high latency?#2020-02-2615:22vemvWe haven't noticed any significant difference - DB size, indices, infrastructure details#2020-02-2703:57mavbozo@vemv how many peers you have? is it just 1 peer - 1 transactor - 1 storage ?#2020-02-2703:58mavbozomaybe in staging you have only 1 peer but in production you have multiple peers and 1 of the peers has high latency?#2020-02-2707:37vemvWe have:
1 Peer Lib per webapp instance
2 transactors
1 postgresql DB#2020-02-2707:38vemv> maybe in staging [...]
All our environments are alike in terms of size, topology etc#2020-02-2616:51dvingoI noticed in on-prem there is an optional :db/index attribute (https://docs.datomic.com/on-prem/schema.html#operational-schema-attributes) but I don't see one in the cloud docs. Are all attributes indexed in cloud?#2020-02-2617:00dvingoI don't see it explicitly stated in the docs, just alluded to here:
"Datomic datalog queries automatically use multiple indexes to support a variety of access patterns" (https://docs.datomic.com/cloud/whatis/data-model.html)
The differences page doesn't mention it other than the full-text difference:
https://docs.datomic.com/on-prem/moving-to-cloud.html#text-search#2020-02-2617:01ghadiall attributes are indexed in cloud#2020-02-2621:07emAre lookup refs not valid values for transactions for ref attributes in cloud? As per https://blog.datomic.com/2014/02/datomic-lookup-refs.html Similarly, they can be used in transactions to build ref relationships. but I get “entity not resolved” with the full tuple as an error in cloud#2020-02-2621:14ghadiwhat did you try and what did it error?#2020-02-2622:25em@ghadi
(d/with with-db {:tx-data [{:space/uuid #uuid "some-space-uuid"
:space/devices [:device/id "some-device-id"]}]})
:space/devices is cardinality many, type ref. I realized the issue with this version is the cardinality many means that the vector is interpreted as a vector of refs, so i’m actually pointing to the ident :device/id and an unknown tempid.
I then tried another version:
(d/with with-db {:tx-data [{:space/uuid #uuid "some-space-uuid"
:space/devices [[:device/id "some-device-id"]]}]})
And this time after querying the database and looking at the committed transactions it seemed like nothing changed. Solved this issue just splitting the reference into temp ids as mentioned (coincidentally) in the latest conversation here, but was just wondering if it’s possible to use lookup refs as the value in ref type attributes.#2020-02-2622:26ghadiwhat error did you get?#2020-02-2622:27ghadiand what is the schema definition of :space/uuid and :space/devices?#2020-02-2622:28em:space/uuid is identity, type uuid
:space/devices is type ref, cardinality many
No error on the second one, just nothing updated (looking at the after-db, the intended space->device datom was not found)#2020-02-2622:29emFirst one errored
tempid 'd5f2962bd37b24c1c7cb076b9053ae77' used only as value in transaction
which made sense per above reasoning, I then added another pair of square brackets to intend it as a lookup ref#2020-02-2622:32em#:db{:ident :space/devices,
:valueType :db.type/ref,
:cardinality :db.cardinality/many}
#:db{:ident :space/uuid,
:valueType :db.type/uuid,
:cardinality :db.cardinality/one,
:unique :db.unique/identity}#2020-02-2622:35ghadithat second try seems like it should have worked....#2020-02-2622:35ghadihang on to the transact return value if you try it again#2020-02-2622:56emTurns out another false alarm, thanks for your time! Am I correct to assume that datomic will automatically infer the meaning of a vector based on both schema ident cardinality and type? Got a little confused from how overloaded the square brackets were#2020-02-2621:27marshall@eagonmeng what version of cloud?#2020-02-2622:10em@U05120CBV Hmm, was there a rollback for the released version of cloud? I’m on DatomicCFTVersion 616, which I believe was just released recently on 2/21 (https://webcache.googleusercontent.com/search?q=cache:cWl9SuBuZeMJ:https://docs.datomic.com/cloud/releases.html+&cd=1&hl=en&ct=clnk&gl=us) but it’s mysteriously disappeared off the main page (https://docs.datomic.com/cloud/releases.html)#2020-02-2622:15marshallNo thats a doc issue. Will fix#2020-02-2622:46marshallfixed#2020-02-2621:28marshallnote: https://docs.datomic.com/cloud/releases.html#569-8835
• (Fix: resolve tempids for reference attributes inside tuples.)#2020-02-2621:29marshallhowever, you should move to the latest; it includes a fix for a different regression in that version#2020-02-2622:04hadilsI need to write to several different "schemas" in a single Datomic transaction, e.g., phone numbers, emails accounts, person information and address information. All of these write create eids which are then stored in a customer. I need to do this atomically, but I cannot figure out how to recover eids from within a transaction function. Any ideas?#2020-02-2622:06shaun-mahood@hadilsabbagh18 Do you need a transaction function, or can you use tempids (https://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution)?#2020-02-2622:07ghaditransaction functions don't resolve tempids#2020-02-2622:08ghaditransaction functions (the ones that run inside datomic) return transaction data, which is later committed#2020-02-2622:09hadils@ghaid @shaun-mahood Understood. How do I create such a transaction atomically?#2020-02-2622:09ghadiif you need to transact something compound in the same Datomic transaction, you just give it to transact:
(d/transact conn {:tx-data [thing1 ... thing2.... thing3]})#2020-02-2622:09ghadithe things can be arbitrary#2020-02-2622:10ghaditransaction functions are something different and should be used to achieve specific purposes (they're like macros that run while the tx is being committed)#2020-02-2622:11hadilsThanks @ghadi. I have a one->many relationship between phone numbers and persons. How would I do that in a single transaction?#2020-02-2622:12ghadiI don't mean to brush off the question, but you should really go through the tutorials#2020-02-2622:12ghadiall of this is covered#2020-02-2622:12ghadiand I'll do a bad job explaining 😃#2020-02-2622:13ghadiyou're using Cloud?#2020-02-2622:13shaun-mahoodI've got an example I can paste in here - will redact it quickly and throw it in a thread (the docs are great, but I sometimes need a few extra examples to figure things out the first time)#2020-02-2622:15shaun-mahood(d/transact conn {:tx-data
[[:db/add "request" :request/date request-date]
[:db/add "client" :client/name client-name]
[:db/add "job" :job/name job-name]
[:db/add "job" :job/client "client"]
[:db/add "job" :job/request "request"]]})#2020-02-2622:16shaun-mahood@hadilsabbagh18#2020-02-2622:21hadilsThanks!#2020-02-2622:22hadilsThanks!#2020-02-2622:13hadilsThanks @ghadi. I have gone through all the tutorials and the videos.#2020-02-2622:14hadils@shaun-mahood you can email it to me at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2020-02-2622:14hadilsif you wish.#2020-02-2622:14ghadihttps://docs.datomic.com/cloud/transactions/transaction-processing.html#tempid-resolution#2020-02-2622:15ghadilinking tempids together is how you assert relationships between things#2020-02-2622:16hadils@ghadi. The problem is that i need to commit data atomically, not how to retrieve tempids.#2020-02-2622:16ghadi[{:db/id "customer1"
:customer/name "hadils"}
{:db/id "customer2"
:customer/name "ghadi"
:friends/with "customer1"}]#2020-02-2622:17ghadithings sent in the same tx-data are committed together#2020-02-2622:17ghadiin that example, two new entities are created, and the second entity points to the first entity#2020-02-2622:17ghaditwo tempids "customer1" "customer2" will be resolved into two entity-ids when committed#2020-02-2622:18ghadiyou can send in arbitrary graphs of entities in the same tx#2020-02-2622:18hadils@ghadi. I am with you. So I need to not use tempids then, I need some other reference for a one-many relationship.#2020-02-2622:18ghadi(those are tempids above)#2020-02-2622:19ghadiif you have a unique ID you can send that in instead of the tempid -- but the end result is the same: datomic needs to know what entities you're asserting things about#2020-02-2622:19hadils@ghadi. yes i see that now. is :friends/with a ref value type?#2020-02-2622:19ghadiyes#2020-02-2622:20hadilsthanks. that answers my question.#2020-02-2622:20steveb8nthis got me as well. in cloud, tempids are just strings in the :db/id#2020-02-2622:21ghadihttps://docs.datomic.com/cloud/transactions/transaction-data-reference.html#identifier#2020-02-2622:22ghadithe tx data grammar at the top of that page is useful, along with the examples#2020-02-2622:22shaun-mahoodI think my biggest stumbling blocks in learning Datomic have all been related to how simple it is at the core#2020-02-2622:23hadilsWhat about one to many relationships?#2020-02-2622:25ghadi"cardinality many" attributes can be asserted in a similar way#2020-02-2622:26hadilsOk#2020-02-2622:26ghadi:friends/with [joe hadils...]#2020-02-2622:26ghadiin the transaction data#2020-02-2622:26hadilsThanks a lot @ghadi !#2020-02-2701:50Jon WalchHas anyone seen this?
clojure.lang.ExceptionInfo: entity, attribute, and new-value must be specified
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$eval2142$fn__2147.invoke(sync.clj:84)
at datomic.client.api.protocols$fn__14323$G__14277__14330.invoke(protocols.clj:72)
at datomic.client.api$transact.invokeStatic(api.clj:181)
at datomic.client.api$transact.invoke(api.clj:164)
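For reference, a :db/cas op carries exactly four elements — entity, attribute, expected old value, new value — which appears to be what the "entity, attribute, and new-value must be specified" validation checks. A minimal hedged sketch of the expected shape, assuming a client-API `conn` and an entity id `eid` previously returned by Datomic (both placeholders):

```clojure
;; Sketch only: :db/cas (compare-and-swap) asserts the new value only if
;; the attribute currently holds the expected old value; otherwise the
;; transaction aborts. `conn` and `eid` are assumed, not defined here.
(require '[datomic.client.api :as d])

(d/transact conn {:tx-data [[:db/cas eid :user/speaker 85N 100N]]})
```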
#2020-02-2701:51Jon WalchGoogle is turning up nothing#2020-02-2701:57ghadiWhat transaction data did you send in @jonwalch ?#2020-02-2702:06Jon Walch@ghadi
[{:db/id 1,
:foo/apple 0N,
:foo/processed-time #inst "2020-02-27T02:00:35.561-00:00"}
{:db/id 2,
:foo/apple 0N,
:foo/processed-time #inst "2020-02-27T02:00:35.562-00:00"}
[:db/cas 3 :user/speaker 0N 100N]
[:db/cas 4 :user/speaker 85N 100N]
[:db/cas 5 :bar/running? true false]
{:db/id 5,
:bar/end-time #inst "2020-02-27T02:00:35.569-00:00",
:bar/result? false}]#2020-02-2702:07ghadiDon’t send in your own :db/id, use tempids#2020-02-2702:07ghadiWhich are strings#2020-02-2702:08ghadiWhat version of the compute stack are you running? CAS with a boolean might be problematic- I forget.#2020-02-2702:09ghadiI should clarify: only send in integer :db/ids that Datomic handed you @jonwalch #2020-02-2702:10Jon Walchthats what those are :+1:#2020-02-2702:10ghadi1 is an integer above#2020-02-2702:10Jon Walchtrying to find my datomic cloud version#2020-02-2702:10ghadiDid you print or prn?#2020-02-2702:10Jon Walchyeah I fuzzed the ids#2020-02-2702:11Jon WalchI should've clarified, my bad#2020-02-2702:11ghadiI’m debugging this from my phone, so try to help me out 🙃#2020-02-2702:12ghadiOk if your ids are legit, then take out the boolean CAS and see if you get an error. If you do, I’ll file a report (need your compute stack versions)#2020-02-2702:13Jon WalchDatomicCloudVersion 8812#2020-02-2702:14Jon WalchComputeCFTVersion 535#2020-02-2702:14Jon WalchIs that what you're looking for?#2020-02-2702:14ghadiThanks#2020-02-2702:14Jon WalchChanging the code, will report back in ~10#2020-02-2702:20Jon Walch@ghadi removed the cas, working flawlessly#2020-02-2702:20Jon Walchthank you so much!#2020-02-2702:23ghadiYou might want to upgrade your stack to latest. I feel like I’ve filed this bug before with the crew but I’ll double check#2020-02-2713:01tatutIn datomic cloud 8812 we had a problem with a big transaction creating entities with multiple levels of nested maps... some children ended up on the wrong parent... the same code on newer version worked correctly. I tried looking at release notes but didn't see bug fixes related to that#2020-02-2715:26hadilsGood morning @ghadi @shaun-mahood! I successfully got an complex atomic transaction working this morning! Thanks again for your help!#2020-02-2715:27ghadiwoot! 
thanks for following up.#2020-02-2715:27ghadihopefully the first of many!#2020-02-2715:48souenzzohttps://docs.datomic.com/on-prem/pull.html#as-example
This example is wrong
Should be
[(:artist/name :as "Band Name")]
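In context, the :as pull option aliases the key under which a pulled attribute appears in the result map. A hedged sketch, borrowing the `led-zeppelin` entity id and `db` value from the linked mbrainz-style docs example (both assumed):

```clojure
;; Sketch only: :as renames the map key in the pull result.
;; `db` (a peer db value) and `led-zeppelin` (an entity id) are assumed.
(require '[datomic.api :as d])

(d/pull db '[(:artist/name :as "Band Name")] led-zeppelin)
;; per the linked docs, the result has the shape {"Band Name" "Led Zeppelin"}
```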
#2020-03-0415:12souenzzobump
it is frustrating for newcomers to fail on running doc examples
I feel uncomfortable recommending something whose (already thin) documentation doesn't work#2020-02-2716:45joshkhapologies for the cross post, but i was wondering if anyone has an answer to this forum post regarding a single transaction of two facts about the same entity resolved by a composite tuple? thanks! https://forum.datomic.com/t/conflict-when-transacting-non-unique-values-for-entities-resolved-by-composite-tuples/1367#2020-02-2810:57jthomsonThat seems like the expected behaviour to me. You're asserting two values for a single-cardinality attribute of one entity, within one transaction. There isn't a concept of "newer" within one transaction, so it's a conflict.#2020-02-2810:58jthomsonIf you split this into two transactions, then of course there would be no problem as you assert abc123 and then abc789#2020-02-2900:21Jon WalchI'm not sure these docs are up to date https://docs.datomic.com/cloud/operation/upgrading.html#compute-only-upgrade#2020-02-2900:22Jon WalchSelect "Specify an Amazon S3 template URL:" and enter the CloudFormation template URL for the version that you wish to upgrade to (see Release page for all versions) then click "Next".#2020-02-2900:22Jon WalchI see
Prerequisite - Prepare template
Prepare template
Every stack is based on a template. A template is a JSON or YAML file that contains configuration information about the AWS resources you want to include in the stack.
Use current template
Replace current template
Edit template in designer
#2020-02-2900:22Jon Walchoh i guess one step is skipped#2020-02-2900:43Jon WalchIs it ever necessary to upgrade the root stack?#2020-02-2909:38joshkhi'm accumulating some general questions about Datomic Analytics. where's the best place to post them? perhaps an Analytics category on http://forum.datomic.com would be useful?#2020-03-0118:42hadilsHi. I have a Lambda ion that needs to invoke another one. I am getting the error message:
User: arn:aws:sts::<ACCOUNT-NUMBER>:assumed-role/stackz-dev2-compute-us-west-2/i-0ff451783095066e5 is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-west-2:<ACCOUNT-NUMBER>:function:stackz-dev2-compute-bh-dummy
I have attached lambda:* permission to stackz-dev2-compute-us-west-2 but this does not help. Does anyone have any experience they can share?#2020-03-0118:54hadilsNvm, I figured it out...#2020-03-0201:20pinkfrogI am using dynamodb, and am considering switching to datomic.#2020-03-0201:21pinkfrogI wonder, if datomic will hurt the write and read performance. It seems datomic relies on some ec2 machine sitting in between the client and dynamodb. won’t that be some bottleneck?#2020-03-0201:22pinkfrogother than the versioned history of the entire db and the query flexibility, I wonder what benefits datomic brings to me other than directly using dynamodb.#2020-03-0208:53em@i I switched from dynamodb to datomic, and it’s been great. While the underlying storage is indeed dynamo, you actually gain performance, not lose it, because of how queries are cached and how your application logic runs in Ions with direct access to memory. The query flexibility is great, and with Ions you gain a VPC and best practices for architecting an application, as well as easy integration into the rest of AWS with lambda triggers.#2020-03-0208:55pinkfrogcost-wise, does datomic incur more read/write requests?#2020-03-0208:56pinkfrogI am also concerned that, datomic seems to suffer single-point bottleneck on the ec2 instances. While with vanilla dynamodb, the bottleneck is only on the dynamodb site.#2020-03-0213:48joshkhshould it be possible to have two entities in Datomic with the same :db/ident but different :db/ids?
[{:db/id 111
:db/ident :color/green}
{:db/id 999
:db/ident :color/green}]#2020-03-0213:49ghadino#2020-03-0213:50ghadido you see that in your database?#2020-03-0213:53joshkhi do, and it's causing problems. i have entities referencing what should be the same enumerated value, but are in fact different entities.#2020-03-0213:55favilacould the history DB be involved?#2020-03-0213:55favilahow precisely did you produce those results you posted above?#2020-03-0213:58joshkhnope, history isn't at play. i don't know how or when it happened. we just stumbled upon it today while chasing down a uniqueness conflict in an API.#2020-03-0213:59favilawhat is the result of (d/q '[:find ?e :where [?e :db/ident :color/green]] a-plain-current-db)?#2020-03-0214:01joshkhgood question! the weird thing is that when we query for the ident, we only get one result.#2020-03-0214:01joshkhhowever...
(let [db (client/db)]
[(d/pull db [:*] 55555555555555555)
(d/pull db [:*] 10101010101010101)])
=> [#:db{:id 55555555555555555, :ident :color/green}
#:db{:id 10101010101010101, :ident :color/green}]#2020-03-0214:02favilaare those real entity ids?#2020-03-0214:02favilais this real code?#2020-03-0214:02favilaplease real code only#2020-03-0214:03joshkhthat's real code with the db/ids and db/idents replaced with other values#2020-03-0214:03favilaok, then you should file a support ticket#2020-03-0214:03favila(why the replacement? there’s nothing sensitive about entity ids)#2020-03-0214:03ghadiand buy a lottery ticket at the same time#2020-03-0214:04favilato prepare your ticket, get the full history of datoms for each entity#2020-03-0214:05favilathis is basically “impossible” so you’ve either stumbled on a really amazing bug, or you’re missing something#2020-03-0214:05joshkhregarding sharing db/ids, i'd rather be safe than sorry. i can't think of anything i'd do with one, but you never know. 🙂#2020-03-0214:07favilause real code in your ticket at least. entity id details may matter#2020-03-0214:08joshkhyes of course, always do in support tickets. just not in public channels with work-related db/ids. thanks for your input favila, i'll be sure to include any historical datoms.#2020-03-0216:55souenzzo(let [k1 (keyword "color" "green")
k2 (keyword "color" "green ")]
{:k1 k1
:k2 k2
:pr-str-k1 (pr-str k1)
:pr-str-k2 (pr-str k2)
:equal? (= k1 k2)})
=> {:k1 :color/green,
:k2 :color/green,
:pr-str-k1 ":color/green",
:pr-str-k2 ":color/green ",
:equal? false}#2020-03-0216:56souenzzo"dynamic" keywording is evil 😈#2020-03-0218:41joshkhoh for sure, i've been bitten by that in the past. it was the next thing i checked 🙂
(let [db (client/db)
entity-1 (d/pull db [:*] 12345)
entity-2 (d/pull db [:*] 67890)
ns-name (juxt namespace name)]
[(-> entity-1 :db/ident ns-name)
(-> entity-2 :db/ident ns-name)
(= (:db/ident entity-1) (:db/ident entity-2))])
=> [["color" "green"] ["color" "green"] true]
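A further check along these lines (a hedged sketch; the entity id is a placeholder): list every datom ever asserted or retracted about each suspect entity via a history db, which is also the data a support ticket would want:

```clojure
;; Sketch only: a history db exposes retracted datoms too, and the
;; five-element datom pattern binds the added? flag. `db` is assumed to
;; be a current db value; the eid below is a placeholder.
(d/q '[:find ?attr ?v ?tx ?added
       :in $ ?e
       :where
       [?e ?a ?v ?tx ?added]
       [?a :db/ident ?attr]]
     (d/history db) 55555555555555555)
```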
#2020-03-0218:48joshkhand to add to the mystery, i can't query for either entity.
(d/q '{:find [(pull ?e [*])]
:in [$]
:where [[?e :db/ident :color/green]]}
(client/db))
=> []
anywho, ticket opened 🙂#2020-03-0214:01favilaI want to tease apart two problems 1) these enum attrs are not pointing at the entities I expect 2) there’s actually more than one entity with the same :db/ident value.#2020-03-0214:01favilathere are many ways to cause 1 which won’t cause 2#2020-03-0214:01favilaso let’s rule out 2#2020-03-0214:51daemianmackalso a quick check on the content of the db/ident seems in order — does A’s :color/green value really equal B’s :color/green value?#2020-03-0220:29John ContiAnyone know how to report Datomic documentation bugs? I just found that https://docs.datomic.com/cloud/getting-started/get-connected.html has an error datomic client access system is (I think) supposed to be datomic-access -r <AWS Region> client <Datomic System Name>#2020-03-0222:25joshkhThat looks correct to me, although you should be able to include the region if needed. Are you on the very latest CLI version? They released an update not too long ago. #2020-03-0315:32vlaaadHi! is there a way in a transaction to reference txInstant in the same way we can reference tx with "datomic.tx" ?#2020-03-0315:33souenzzo@vlaaad last time that i searched about it i ended up with [:db/add "datomic.tx" :dummy-attribute ""] 😞#2020-03-0315:37favilaIn the response :tx-data , look for an assertion of :db/txInstant where the e and the tx of the datom are the same#2020-03-0315:38vlaaadI want to save tx instant on an entity inside transaction#2020-03-0315:38favilaI know, but this is for Enzzo#2020-03-0315:38favilaWhat you want you can’t do#2020-03-0315:38vlaaadah#2020-03-0316:35souenzzoI got it wrong, sorry.#2020-03-0315:37vlaaadhuh? I want to save tx instant on an entity#2020-03-0315:39favilathe tx instant is not available to transaction functions.
I’m not sure when the implied tx instant datom assertion is added#2020-03-0315:39favilayou can add it explicitly if you know the txinstant is older than the last tx instant#2020-03-0315:40favilaconsider also referencing the tx instead of copying the instant#2020-03-0315:40vlaaadsomething like [:db/add "my-entity" :my-entity/created-at "datomic.txInstant"] . I just want to have a precise instant because later I might use in as-of queries#2020-03-0315:40favilawhy do you want the date to match instead of transacting your own instant?#2020-03-0315:40vlaaadWould prefer to be able to do it in single transaction, so everything is consistent#2020-03-0315:41favilaI’m talking about a single transaction#2020-03-0315:42vlaaadbecause this “my-entity” is a public release version, sort of like a git tag that is then used by consumers to see data at that time#2020-03-0315:42favila{:db/id "my-entity" :my-entity/creating-tx "datomic.tx"} is one option#2020-03-0315:43favilabut you may be mixing domain time vs time of record#2020-03-0315:43Alex Miller (Clojure team)I don't think you can or should do this? the datom already is in a transaction that will have the txInstant when it's transacted#2020-03-0315:43favila^^^, although ergonomically it’s not accessible as data, only metadata (i.e. 
can’t get at it with d/pull)#2020-03-0315:44vlaaadyes, and this is a possibility I’m thinking about as well, but using tx id instead of date as a version will make me expose implementation details#2020-03-0315:44favilayour application would not expose tx id, it would follow the ref and expose the txInstant#2020-03-0315:44vlaaadhmmm#2020-03-0315:45favilahttps://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2020-03-0315:45favilaBasically you want to use the TX metadata as your “domain time” for these entities#2020-03-0315:45favilathat may be warranted, but keep in mind as-of and other history features are not designed to manage domain time#2020-03-0315:46favilaso even the use case of “I have a created-at instant on an entity, I now want to use that to see what the db looked like at that moment” is suspect because it is blending those times#2020-03-0315:47favilathis may be fine if you want this domain time to have the same guarantees as your time-of-record, but in that case you should reference the TX directly (or even better use the TX on the datom directly)#2020-03-0315:48vlaaadthanks @favila you sent me in the right direction :+1:#2020-03-0315:49favilaglad to help. This is a subtlety of datomic’s history features it took me a while to internalize#2020-03-0315:49favilalike everyone else I was eager to use it to manage domain time too#2020-03-0315:50favilainterestingly Crux adds domain time as a first class concept on top of what I’m calling “record” time: https://opencrux.com/#2020-03-0315:51favilaIt makes other different tradeoffs vs datomic, but if time-traveling your domain is really important it’s an option to consider#2020-03-0316:19Daniel MasonHi there, I think I may have run into a bug on datomic-free (using version 0.9.5697)?
I've made a small example of it https://github.com/danmason/datomic-close-query but essentially I was using datomic.api/query with a :timeout (in the example I set the timeout to 1ms, and it behaves in the same way) and it appeared to prevent my application from closing properly? Removing the :timeout from the query-map allowed it to exit fine.#2020-03-0316:26Alex Miller (Clojure team)did it close if you wait 1 minute?#2020-03-0316:26Alex Miller (Clojure team)if so, maybe (shutdown-agents) ?#2020-03-0316:27Daniel MasonI was originally running it on something a bit longer and did include (shutdown-agents) (forgot to add that to my little example, but might be worth a try!) but it did continue running longer than a minute. I'll give that a go on this too, however, and get back to you.#2020-03-0316:31Alex Miller (Clojure team)was just a shot in the dark :)#2020-03-0316:32Daniel MasonMhm 🙂 Including (shutdown-agents) , it does continue to run regardless.#2020-03-0317:10magraHi, when the entity got entered twice what is an idiomatic way to merge these two entities. I have entities A and B and want to change all refs that point to B to point to A and then delete (retract-entity) B. I am not a native speaker and fail to find the right keywords to google for this. Does anyone know a manual entry or blog post that describes that?#2020-03-0320:43stijnwhen I'm executing clj -A:ion-dev "{:op :push}", I'm seeing the following error
{:command-failed "{:op :push :region us-east-1}",
:causes
({:message "Unable to transform path",
:class ExceptionInfo,
:data
{:home "/github/home",
:prefix "/github/home/.gitlibs/libs",
:resolved-coord
{:git/url "
What does this error mean?#2020-03-0415:35Alex Miller (Clojure team)some Datomic folk are out atm, response might be delayed, but I'll copy into our internal support room#2020-03-0418:54Lucas BarbosaIs there a way to perform a "left anti join" on datomic? I want to find all the entities with a certain attribute whose value is not in a list that I pass in as an argument
For instance, imagine I have the :order/type attribute, and I want to find all the orders whose :order/type is different than let's say :created and :delivered
The argument would be [:created :delivered] , and it could change#2020-03-0419:08favilaif the attribute value you are testing is cardinality-one, the easiest thing IMO is to provide the filter as a set and (not [(contains? ?filter ?v)])#2020-03-0419:09favilaotherwise, you want a negated variant of….this https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input/43808266#43808266#2020-03-0420:07Lucas Barbosathanks#2020-03-0500:00lilactownthis is sort of related to datomic, so I thought I might find people who knew here:
is it possible to statically analyze a query and always know all of the attributes that a query depends on?#2020-03-0505:44favilaNo, as attrs to match can be input, dynamically built, or you can call an arbitrary function#2020-03-0515:47lilactown> dynamically built
can you show me what you mean by that? the other two make sense#2020-03-0515:52favila[?e :use-attr ?attr] [?e ?attr ?v]#2020-03-0515:52lilactowngot it. thank you!#2020-03-0515:52favila[(keyword ?foo ?bar) ?attr] [?e ?attr ?v]#2020-03-0515:53favila[(rand-int 1 1000) ?attr] [?e ?attr ?v] 😏#2020-03-0515:53lilactownhahaha#2020-03-0509:54dmarjenburgh@lvbarbosa @favila I’m struggling with the same problem, but I want to match on multiple attributes and the matches to exclude is stored in datomic as well (not as separate input).
E.g. a list of items:
[
{:item/department "D1" :item/type "A"}
{:item/department "D1" :item/type "B"}
{:item/department "D1" :item/type "C"}
{:item/department "D2" :item/type "B"}
{:item/department "D3" :item/type "A"}
{:item/department "D3" :item/type "C"}
]
And a list of tuples with [dep type]s to hide:
{:item/hidden [["D1" "A"] ["D3" "C"]]}
I got it working with the following query:
(d/q {:query '[:find (pull ?item [...])
:where
[?item :item/department ?dep]
[?item :item/type ?type]
[(tuple ?dep ?type) ?dep+type]
[(q '[:find (set ?hidden)
:where [_ :item/hidden ?hidden]] $) [[?results]]]
(not [(contains? ?hidden ?dep+type)])]
:args [db]})
But I’m not sure about using the set function in the find clause of the subquery (it’s not documented). And I’m not sure if there is an easier/more performant way to do it#2020-03-0514:21favilause distinct instead of set (honestly it might just be an alias for set) https://docs.datomic.com/cloud/query/query-data-reference.html#built-in-aggregates#2020-03-0514:22favilaYour query works? I don’t see how the ?hidden in your last clause is bound#2020-03-0514:23favilaI would move building the hidden set up higher. you can also issue two queries#2020-03-0514:23favilasupplying the output of one as the input to the next#2020-03-0514:24favilathis isn’t great because it can’t make use of indexes#2020-03-0515:24dmarjenburghI adjusted my actual case and typed it over, so it has an error. The ?hidden should be ?results.#2020-03-0515:24dmarjenburghDoes distinct always yield a set? I assumed it would be a seq, like clojure.core/distinct.#2020-03-0515:25dmarjenburghThanks for the feedback, I’ll try different approaches to see if there is a performance difference#2020-03-0515:27favilaconsider an initial filter using the most selective part of the tuple#2020-03-0515:27favilaso that you can make use of any value indexes on item-department or item-type#2020-03-0515:28favilaalternatively, make a composite index and match against that instead#2020-03-0515:28favila(probably a better option anyway)#2020-03-0516:05hawkeyhi, does someone know how to get statistics of Datomic database (total size, size by entity, index size …)?#2020-03-0516:15Joe Lane@hawkey Using the Client API: https://docs.datomic.com/client-api/datomic.client.api.html#var-db-stats#2020-03-0521:22joshkhi have a need to rename two db idents, and then repurpose their old idents as new attributes (which i know isn't recommended). i started by aliasing the old entities with their new idents, and then transacted two new attributes with the old idents as i would like any new attribute definition. 
one ident was repurposed successfully -- a value type of long. but the other, which was/is a reference, throws an exception: Ident :player/details cannot be used for entity 666, already used for :new-player/details . Is it possible to repurpose an ident which was previously claimed by a reference attribute?#2020-03-0521:48marshallthat sounds like a uniqueness violation#2020-03-0521:49marshalldoes the schema of the one that didnt “repurpose” include a uniqueness attribute?#2020-03-0607:55joshkhthe failed repurposed ident was originally claimed by a ref attribute which was included in a unique composite tuple. i removed the unique constraint on that tuple, but i still get the same exception.#2020-03-0607:55joshkhlooking at the alias, and the tuple attr which refers to that alias, neither have a unique constraint#2020-03-0608:21joshkhso the whole thing looks more like this:
1. :player/details, claimed by a ref attribute, aliased to :new-player/details
2. :player/details+region tuple attribute aliased to :new-player/details+region
3. unique by identity constraint removed from :new-player/details+region tuple attribute
4. transact {:db/ident :player/details :db/valueType :db.type/ref :db/cardinality :db.cardinality/one}
Ident :player/details cannot be used for entity 666, already used for :new-player/details
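For background, renaming an ident is just asserting a new :db/ident on the existing attribute entity, and Datomic keeps the old ident resolving to the same entity — which appears to be why step 4 above conflicts. A hedged sketch of the rename step (`conn` assumed):

```clojure
;; Sketch only: after this transaction the attribute entity answers to
;; :new-player/details, but the old ident :player/details still resolves
;; to the same entity id, so re-using it for a brand-new attribute can
;; collide (the "already used for" error above).
(d/transact conn {:tx-data [{:db/id :player/details
                             :db/ident :new-player/details}]})
```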
#2020-03-0609:54grounded_sageWhat would best practice be for say.
Getting a CSV dump from a client each night. Which contains all historical data (that shouldn’t change) and active data that is changing. Committing just the changes to Datomic.
Where the data they provide has IDs which associate across other CSVs.
Obviously you could simply query for the set of IDs in the database and then only transact the new IDs for the data that shouldn’t change. But some data is subject to change at varying frequencies, whereas other data shouldn’t change but may be messed up on their end, and you also want to somehow capture that.
Would you hash the data associated with the provided id per namespace/csv and transact that? #2020-03-0609:55grounded_sageFraming my question is a bit difficult so I hope people can follow me#2020-03-0611:03grounded_sageSo I guess the model would be this set of data doesn’t change. If it changes transact it but only ever give me the first instance of it (then notify me it changed - probably auditing logic on my side). As for if they mess up their id space.. I guess that problem will surface downstream and we can retract the transaction?#2020-03-0611:55grounded_sageMaybe something like.
:ticket-purchase/active-data Boolean or :ticket-purchase/historic-data Boolean.
Then when the data has changed when say historic is set to true. The data is logged with a false. So you can query the history of ticket id and see all changes even if it is wrong. But query the right one using the true. #2020-03-0615:27vlaaadIs this a bug?
(seq (d/tx-range (get-conn) {:start #inst "2020-03-06T15:14:52.000-00:00"}))
=> ({:t 38
:tx-data [#datom[13194139533350 50 #inst "2020-03-05T14:24:23.642-00:00" 13194139533350 true] ...]}
{:t 39
:tx-data [#datom[13194139533351 50 #inst "2020-03-06T15:14:52.119-00:00" 13194139533351 true] ...]})
(entity 50 is :db/txInstant , so a request for txs since today includes tx from yesterday)#2020-03-0909:29vlaaadAh, I understood my mistake: start point 2020-03-06T15:14:52.000 is before the second returned tx 2020-03-06T15:14:52.119 (notice the millis), so it returns the previous transaction which makes sense :+1:#2020-03-0616:23donyormSo I'm trying to set up my project to work with ions in datomic-cloud, but I've been having issues with it crashing. I tried to reproduce locally, using the dependencies an ion push operation prints, but I'm getting the following error
Error building classpath. Could not find artifact com.cognitect:s3-creds:jar:0.1.23 in central ()
Any idea why that library can't be found?#2020-03-0617:41joshkhdo you have the following in your deps.edn?
:mvn/repos {"datomic-cloud" {:url ""}
#2020-03-0617:42donyormYes I do#2020-03-0617:45joshkhdo you get the error when you deploy your code? or does it happen when you run your project?#2020-03-0620:51donyormIt happens when I run the project, but only when I include the list of dependencies given when you push. I don't have my terminal up now, but it's something like "overridden dependencies"#2020-03-2319:59Jon WalchI'd like to avoid returning nearly all foo s from the db in this query#2020-03-2320:02ghadi[?target :foo/end-time ?end-time]
[(> ?end-time ?threshold)]
and pass in e.g. yesterday as the threshold?#2020-03-2320:02Jon WalchCool, was wondering if there was a better way, but this will do fine. Thanks!#2020-03-2320:50marshallyou could use a subquery#2020-03-2320:51marshalldepending on how ‘unique’ your max value is#2020-03-2320:51marshallhttps://stackoverflow.com/questions/23215114/datomic-aggregates-usage/30771949#30771949#2020-03-2320:51marshallfind the max (or min) value in the inner query, use it find the db/id (or whatever else) in the outer query#2020-03-2321:55Jon WalchOh nice! I'll give this a shot too.#2020-03-2403:33Nolanif an ion fetches an ssm parameter as in the event-example: https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L40-L48
(def get-params
,,,
(memoize #(,,, (ion/get-params ,,,))))
when can you expect that to get recomputed? e.g. after next deployment, after next upgrade, never, etc.?#2020-03-2405:57em@nolan If you've memoized that function like that it'll update each time the process is cycled so an ion deploy will work to refresh. ion/get-params is just a wrapper around https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html if you want more details
Although it might be a better idea just to write your own version. Be careful about exceeding 10 parameters per deploy group, your app might explode if you don't explicitly work around the 10 results limit in this API call, as last I remember ion/get-params does not implement the logic to keep calling next-token until all params are fetched.#2020-03-2406:00Nolanah! really appreciate the input @eagonmeng. affirms my suspicions (and hopes, desires, etc. 🙏), and makes a lot of sense. also appreciate the additional info re: parameter limit, won't be relevant here, but super good to know.#2020-03-2406:33Nolanas a slightly tangential follow-up, does memoizing the call to ion/get-env make any material difference? or is that an in-process access anyway (only really worried about production, if that matters)?#2020-03-2413:32marshallhttps://softwareengineeringdaily.com/2020/03/24/datomic-architecture-with-marshall-thompson/
(little shameless self promotion 🙂 )#2020-03-2415:56BrianQuery question: I have an entity called Interaction with an attribute called :devices which is of cardinality many. Those devices are eids. Given a device eid, I'd like to check to see if that device eid appears in an Interaction's :devices and then return all the other device eids. Here's what I have so far:
'[:find ?devices
:in $ ?dev-eid
:where
[?interaction :devices ?dev-eid]
[?interaction :devices ?devices]
]
However the problem is that my original ?dev-eid is also inside ?devices . I could filter it out after the query, but I feel like it would be better practice to include that filtering in the query (correct me if I'm wrong on that). Additional info: there are only ever 2 eids in any :devices . How can I remove ?dev-eid from ?devices inside my query? Something like "grab all ?devices which are not equal to ?dev-eid ".#2020-03-2416:10favilaAdd
[(!= ?devices ?dev-eid)]
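In context, Brian's query with this predicate added would look something like the following sketch (keeping the placeholder :devices attribute from above; db and dev-eid are assumed to be in scope):

```clojure
;; Return the other device eids on any interaction that includes
;; ?dev-eid, excluding ?dev-eid itself via the != predicate.
(d/q '[:find ?other
       :in $ ?dev-eid
       :where
       [?interaction :devices ?dev-eid]
       [?interaction :devices ?other]
       [(!= ?other ?dev-eid)]]
     db dev-eid)
```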
#2020-03-2416:14BrianPerfect thank you @U09R86PA4! One more (I think) simple thing. I end up only getting a single ?device when I do this. However if I return ?devices ?interaction I end up getting the 4 that I expect (but they are each paired with the ?interaction eid I don't want). It seems like the query just grabs the first one and returns it. How can I have it grab them all?#2020-03-2416:15favilaare the four you expect the same?#2020-03-2416:15favilaquery normally returns sets, so if the same device appears 4 times it won’t matter, you will get one device#2020-03-2416:18favilayou can either include :find ?interaction to get the devices per interaction, or use :with ?interaction to include it for the set but then have it removed before returning. queries with :with do not return sets#2020-03-2416:19BrianYou were right! The same device appeared multiple times. The data model was slightly different than I expected. I'm getting exactly what I want now =]#2020-03-2422:49Ben HammondHi.
I'm running datomic-pro-0.9.5697 local transactor and then datomic.peer-server and then datomic.client.api/connect to make the actual connection.
It's been working fine for ages; but I've just started to see
Reflection warning, cognitect/hmac_authn.clj:80:12 - call to static method encodeHex on org.apache.commons.codec.binary.Hex can't be resolved (argument types: unknown, java.lang.Boolean).
Reflection warning, cognitect/hmac_authn.clj:80:3 - call to java.lang.String ctor can't be resolved.
warnings and now
Caused by: clojure.lang.ExceptionInfo: No name matching localhost found
{:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "No name matching localhost found", :cognitect.http-client/throwable #error {
:cause "No name matching localhost found"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "No name matching localhost found"
:at [sun.security.ssl.Alert createSSLException "Alert.java" 128]}
{:type java.security.cert.CertificateException
:message "No name matching localhost found"
:at [sun.security.util.HostnameChecker matchDNS "HostnameChecker.java" 225]}]
:trace
[[sun.security.util.HostnameChecker matchDNS "HostnameChecker.java" 225]
[sun.security.util.HostnameChecker match "HostnameChecker.java" 98]
[sun.security.ssl.X509TrustManagerImpl checkIdentity "X509TrustManagerImpl.java" 459]
...
at datomic.client.api.async$ares.invokeStatic (async.clj:56)
datomic.client.api.async$ares.invoke (async.clj:52)
datomic.client.api.sync.Client.connect (sync.clj:71)
datomic.client.api$connect.invokeStatic (api.clj:118)
datomic.client.api$connect.invoke (api.clj:105)
errors
I'm not aware of changing anything; the SSLHandshakeException makes me wonder if some certificate has expired.
I don't see any errors reported on the transactor log or in the peer server console#2020-03-2422:54Ben Hammondoh I just found https://forum.datomic.com/t/ssl-handshake-error-when-connecting-to-peer-server-locally/1067/7#2020-03-2423:02Ben HammondHmmm, naively adding
:validate-hostnames false
didn't seem to help#2020-03-2423:02Ben Hammondhaven't tried updating datomic binaries though#2020-03-2423:06Ben Hammondjust for reference, I am trying this
(datomic.client.api/connect
(datomic.client.api/client {:server-type :peer-server,
:access-key "myaccesskey",
:secret "mysecret",
:endpoint "localhost:8998",
:validate-hostnames false})
{:db-name "xiangqi"
:validate-hostnames false}
)
and I get
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:56).
No name matching localhost found
#2020-03-2423:24Ben Hammondupdating from
com.datomic/client-pro {:mvn/version "0.8.28"}
to
com.datomic/client-pro {:mvn/version "0.9.43"}
seems to have done the trick#2020-03-2423:27Ben Hammondtransactor/peer server did not need upgrading; just the client#2020-03-2501:56derpocioushey all! Anyone have a simple example of subscribing to changes of a datomic db? 🙏#2020-03-2501:58derpociousAlso, is there any common lein or boot templates to quickly scaffold out a CRUD backend for datomic (ideally a CRUDS starter template with subscriptions built-in as well!) thanks!#2020-03-2513:37adamfeldmanAs far as backends go, the closest I’m aware of is the (alpha) Datomic plugin for Pathom Connect (EDIT: see following message for other good options!) https://github.com/wilkerlucio/pathom-datomic, https://github.com/wilkerlucio/pathom.
If you’re looking for a frontend or full-stack solution, there’s Fulcro RAD (also in alpha and under rapid development) https://github.com/fulcrologic/fulcro-rad-demo, https://github.com/fulcrologic/fulcro-rad-datomic, https://github.com/fulcrologic/fulcro-rad. Note that Fulcro also uses Pathom internally, but RAD uses its own Datomic adapter.
There exist nice ways of adding websockets to Fulcro (enabled via Sente), but I don’t believe it’s currently pre-integrated with Fulcro RAD etc https://github.com/fulcrologic/fulcro-websockets#2020-03-2513:44adamfeldmanIf you already have the frontend settled, another option may be to stand up a GraphQL server. Lots of ways to do that.
Over Postgres (or Yugabyte…) with https://hasura.io (supports subscriptions out of the box). To bridge Clojure(script) and GraphQL, you might use https://wilkerlucio.github.io/pathom/#GraphQL.
There’s also this new, pure-Clojure GraphQL API creator for MySQL/Postgres https://github.com/graphqlize/graphqlize
Hodur has an “experimental” adapter for declaratively creating a GraphQL API over Datomic (and lots more cool stuff) https://github.com/hodur-org/hodur-engine#2020-03-2517:01derpociousThanks @UCHV4JZ7A! Interesting, I've never heard of Panthom or "EQL"#2020-03-2517:02derpociousI thought you could already ask for pieces of data with the standard Datomic api. If that is the case then isn't graphQL kind of unnecessary? 🤔#2020-03-2517:14adamfeldmanI think one way to explain it is that EQL+Pathom or GraphQL both serve to decouple your frontend from the schema within your database. Datomic (as with all? SQL databases) is not intended to be exposed directly to a frontend client.
This talk from the creator of Pathom may be enlightening: https://www.youtube.com/watch?v=IS3i3DTUnAI. There are a few other good Pathom (and Fulcro) talks out there from the same person#2020-03-2517:17adamfeldmanIt’s possible you and I are dreaming about the same kind of thing — easily pull DB data from a SPA-style web frontend, with minimal setup. The closest I’ve found outside Clojure-land is https://github.com/marmelab/react-admin + https://github.com/hasura/ra-data-hasura.
As it matures, I expect Fulcro RAD will enable a similar experience to that, but built on better and more flexible primitives that you can’t ever outgrow#2020-03-2517:22adamfeldman(“Better” to me means a few things. React-admin is made out of the typical mish-mosh of tools from the react ecosystem, which I find to be highly “inconsistent”, with the added bonus of high churn and the resulting breakage. Fulcro RAD is still just Fulcro, which I find to be the closest thing to an internally-consistent framework in the Clojure(script) ecosystem — an ecosystem which itself prizes stability)#2020-03-2504:08stuartrexkingThis is a question about datalog. I posted this on #datalog but there isn't my activity there. I appreciate any help as I'd like to understand this.#2020-03-2504:14stuartrexkingIs it a join? I think it's a join.#2020-03-2507:24em@stuartrexking You probably shouldn't think of the function call so much as assignment as just binding. So yeah, it's just joining on different people with the same birth date. Another way to think about it is imagine if there was a :person/born-month and a :person/born-date attribute, you could replace the function call clauses with something like [?p1 :person/born-month ?m] [?p2 :person/born-month ?m]#2020-03-2507:25stuartrexkingAh. That makes sense. Thank you. #2020-03-2600:54SvenI’ve noticed that when I save a float e.g 200.9 then it is saved in the database (or at least displayed when I query the value) as 200.899993896484. Is that a feature and if so then why?#2020-03-2601:34favilaThat is just how floats work. They cannot represent every decimal fraction precisely. Use bigdecimal type if you need exact decimal precision#2020-03-2601:41Svenok, thanks. I guess for things like latitude/longitude, financial values etc. it would be wiser to use bigdecimal. 
For my simple use cases float has worked fine but the issue I have now is that each time I update an entity which has a float attribute then a change is triggered even if the value does not actually change (`200.9` provided vs 200.8999… in db). I would not mind it but I provide the user with an activity log for each entity and now this float value change appears in every transaction. This was unexpected 😕#2020-03-2602:57Braden Shepherdsonit's worth mentioning that financial data often works with (64-bit integer) millicents or microcents, to avoid exactly this kind of problem.#2020-03-2602:58Braden Shepherdsonit means rounding, but no one cares if they win or lose a rounding to the thousandth of a penny.#2020-03-2606:26mavbozohttps://floating-point-gui.de/basic/#2020-03-2606:28mavbozoi've faced problems with purchase amount in mysql because it was stored as float#2020-03-2611:15teodorluFor lat/long, I think floats are fine (assuming you aren't doing advanced GPS stuff). Just beware how much precision you actually want when displaying the numbers!#2020-03-2616:55vYou have two options
1. use double, which as the name implies has 2x the precision of float.
2. Recommended: save all your numbers in long format, i.e. 2009, and when you need to convert it you can always divide and get the value you desire. It also preserves the original value, i.e. no floating-point precision errors#2020-03-2611:26motformHi, I’m new to datomic and have run into a problem where it feels like there would be some kind of (at least semi-) best practice. I have a set of e of type foo in my database, and I want to filter with user input. foo has a few a that the user can include in their filter, including cases where you can filter by multiple enumerations of a specific a, but I’m struggling to find a good way to do a conditional query. If I make a query that expects all possible as as :in, then (I think) it only matches when all the inputs are valid. I also sort of feel like this is a place where I should use the pull api, but I have not found a way to run a pull over all e of foo.#2020-03-2611:28motformHope the explanation makes sense! I’m running datomic free on-prem, so I’ve not looked at the client api.#2020-03-2614:01Joe LaneWhich number do you want:
1. "All e of type foo which have a1 OR a2 ?"
2. "All e of type foo which have a1 AND ?"#2020-03-2614:03motform2-ish. “All e of type foo which might have a1 AND a2 and a..n depending on user input”#2020-03-2614:06val_waeselynck@UUPC4CHEZ your requirement is still ambiguous to me (what does "might have" mean?) but this may help: https://stackoverflow.com/questions/43784258/find-entities-whose-ref-to-many-attribute-contains-all-elements-of-input#2020-03-2614:08motformThat looks interesting, will check it out! In this concrete case, I have the e :cocktail that have a like :ingredient :title :author, the end-user should be able to build a search query that can, but does not have to, include filtering by these#2020-03-2614:09Joe LaneHow many a exist ( or will exist? )#2020-03-2614:10motforma known amount, all :cocktail have the same data in this set#2020-03-2614:11Joe LaneAre you referring to a as an attribute for the entity?#2020-03-2614:12motformyes, is that not correct? all cocktail entities have the same complete set of attributes#2020-03-2614:14Joe Lane{:type :cocktails
:cocktail "Martini"
:ingredients ["Vodka", "vermouth"]
:author "Unknown"}#2020-03-2614:15Joe LaneAnd you want the ability to search for drinks which take Vodka#2020-03-2614:16motformYes! and that I can do, but the user should be able to search for cocktails that contain vodka, cream and have the word “russian” in fulltext#2020-03-2614:16motform(d/q '[:find [(pull ?e [:cocktail/id :cocktail/title :cocktail/recipe :cocktail/preparation :cocktail/ingredients]) ...]
:in $ [?ingredients ...] [?search ...]
:where
[?e :cocktail/ingredients ?ingredients]
[(fulltext $ :cocktail/fulltext ?search) [[?e ?n]]]]
(d/db conn) ingredients fulltext)#2020-03-2614:17Joe LaneStart by building the query up as data using the cond-> macro. I cant help any more right now but I can later tonight.#2020-03-2614:18motformis how far I got. its a concrete version of only qing two a, but it fails if any of the two vecs are empty, as it has nothing to match on#2020-03-2614:18motformah, cond->! I don’t think I’ve used that one before. thank you so much for all your help, will look into that!#2020-03-2614:22val_waeselynckWhen generating queries, it's usually more convenient to do it in map form:
{:find [...] :in [...] :where [...]}#2020-03-2614:24motformoh, can you pass maps to q? that makes life a lot easier lol#2020-03-2616:41alidlorenzoFor Datomic Cloud is it recommended to have one system per env? (dev, staging, prod)
I ask bc the ion-config.dev requires an app-name so not sure how to dynamically configure that based on environment.
I found a related question in forum (https://forum.datomic.com/t/parameterizing-ion-configuration/479) but no conclusive answer from anyone on it, so I’m wondering how people are handling different environments.#2020-03-2617:49marshallhttps://docs.datomic.com/cloud/operation/planning.html#2020-03-2617:50marshallyou can either have one or multiple systems
you can configure your environments with ion environment maps and parameters https://docs.datomic.com/cloud/ions/ions-reference.html#environment-map#2020-03-2618:47alidlorenzosay I want multiple systems - so I’d keep the app/system name the same (but give them different environment maps?
i didn’t think aws would allow multiple systems with same name but i’ll go ahead and try it#2020-03-2618:48alidlorenzooh maybe the system name has to be different but the app-name can be the same (i guess that’s why that option exist)#2020-03-2618:48marshallcorrect ^#2020-03-2618:49alidlorenzothat clarifies a lot, thanks#2020-03-2619:23joshkhcan i call d/with on the results of calling d/with ?#2020-03-2619:37val_waeselynckYes, and a lot of power follows from that :)#2020-03-2619:43joshkhright. i'm sure i've done this before, but i'm drawing a blank. d/with requires a d/with-db conn, but the result of d/with doesn't return a connection. does it?#2020-03-2619:51joshkhoh of course. :db-after is already with-db'ed.#2020-03-2621:21johnjDoes the peer uses connection pooling when using a SQL database?#2020-03-2717:23johnjis there such thing as too many entities references in a cardinality many attribute?#2020-03-2717:49motformI have another newbie question about the map form for queries, regarding quoting as it feels like I have misunderstood something.
(d/query '{:query {:find [?e]
:in [$ ?title]
:where [[?e :title ?title]]}
:args [(d/db conn) "negroni"]})
gives me the error nth not supported on this type: Symbol. When I quote the nested query map, it works (not surprisingly, as we don’t wanna eval all the datalog symbols that it complains about if i leave the whole thing unquoted). This works in the repl, but I don’t see how I would write code that does this to the query map without reaching for a bunch of map manipulation, the thought of which gives me that feeling that I’m doing something wrong.#2020-03-2717:50motformWhat confuses me is that in the docs, nothing is quoted. when querying with the map form.
https://docs.datomic.com/cloud/query/query-executing.html
It also shows the nested query map form for a /q and not /query invocation, which also confuses me, as i thought /q wanted a flat map and args as & args , supplied directly to the function#2020-03-2718:02skuttlemanI don't think you want to quote the args, right?
(d/query {:query '{:find [?e]
:in [$ ?title]
:where [[?e :title ?title]]}
:args [(d/db conn) "negroni"]})#2020-03-2718:05motformNo, I guess not, but how do I only quote the query-map? This is the interface to the database, where parse-strainer takes a map of user input and builds a query-map through a cond-> pipeline
(defn strain [strainer]
(let [query (parse-strainer strainer)]
(d/query query)))#2020-03-2718:07favilaThe purpose of quoting is so symbols like ?e don’t get expanded to current.ns/?e and lists like (some-rule) in the query don’t get evaluated.#2020-03-2718:08favilayou can accomplish the same clause by clause, or using forms like (list 'some-rule '?foo ?bar) or even ('some-rule '?foo ~'?bar)`#2020-03-2718:08favilaThis is normal Clojure quoting, it’s not specific to datomic#2020-03-2718:09favilathe query just wants to see literal symbols ?e etc#2020-03-2718:43motformyeah ok, that makes sense of course, I guess I don’t quote that much in Clojure otherwise.#2020-03-2718:43motformBut what I still don’t get is how they want to to use the api. If I have a fn that spits out the following map so that it might then be used to call d/query with, how do I quote only the :query submap?
{:query {:find [?e]
:in [$ ?title]
:where [[?e :title ?title]]}
:args [(d/db conn) "negroni"]}
(update m :query quote) evals the symbols#2020-03-2718:51favilaif you already have the query map, why are you quoting it again?#2020-03-2718:52motformim not! which is where I get confused, haha#2020-03-2718:52motform(defn strain [strainer]
(let [query (parse-strainer strainer)]
(d/query query)))#2020-03-2718:52motformis my end-point#2020-03-2718:52favilaif that is actually what your function returns, then you are done#2020-03-2718:52favilajust hand that map to d/q#2020-03-2718:53motformmy fn
(defn- parse-strainer [{:keys [:ingredients :search :type]}]
(cond-> base-query
ingredients (simple-query :ingredients '?ingredients ingredients)
type (simple-query :type '?type type)
search (fn-query :fulltext '?fulltext 'fulltext '$ search)))
(parse-strainer {:ingredients ["vodka" "cream"] :search ["russian"]})
{:query
{:find [(pull ?e [:id :title])],
:in [$ [?ingredients ...] [?fulltext ...]],
:where
[[?e :ingredients ?ingredients]
[(fulltext $ :fulltext ?fulltext) [[?e ?n]]]]},
:args [(d/db @*conn) ["vodka" "cream"] ["russian"]]}#2020-03-2718:55motformbut when i do (strain {:ingredients ["vodka" "cream"] :search ["russian"]}) i get
Execution error (UnsupportedOperationException) at datomic.datalog/extrel-coll$fn (datalog.clj:300).
nth not supported on this type: Symbol#2020-03-2718:55favilawhere do your args come from?#2020-03-2718:56favilayour (d/db @*conn) is a literal list with d/db and conn elements#2020-03-2718:57favilaI think it’s trying to use it as a vector-type datasource, but d/db doesn’t support nth#2020-03-2718:57motformoh shoot, so i should somewhere do like (def db (d/db conn) and have :arg [db [“foo”] [“bar”]?#2020-03-2718:57favila“arg” is not syntax, it is real objects#2020-03-2718:58favila(d/q query arg1 arg2) and (d/q {:query query :args [arg1 arg2]}) are equivalent#2020-03-2719:00favilacan you show the code that builds args?#2020-03-2719:02motformah, of course! I did not think of that at all#2020-03-2719:02motform“its just data”#2020-03-2719:03motform(defn strain [strainer]
(let [{:keys [query args]} (parse-strainer strainer)]
(apply d/q query (d/db @*conn) args)))
now works! before I had a base map for my query that contained :args [(d/db @*conn)] that i then conjed onto the other args#2020-03-2719:05favilayou can still do that, just don’t quote the args in your base map#2020-03-2719:05favilathe problem is that the db was literally a list, instead of the db object, so some quoting was going on that shouldn’t have.#2020-03-2719:06motformi get that now, thank you so much for your help!#2020-03-2719:07motformi guess i just assumed that it would be invoked somewhere along the line, don’t think i’ve encountered this before#2020-03-2719:07motformdatomic is the only place outside of macros where i’ve actually come across quoting#2020-03-2719:08motformis there a best preferred way to pass the db argument around? in my cases, I’ve used (d/db @*conn) where conn is an atom holding a (d/connect uri), would it be “better” if it was just a var with a (d/db conn)?#2020-03-2719:09favilahttps://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2020-03-2719:09favilalike any code, avoid mutables#2020-03-2719:09favilaconn is a mutable#2020-03-2719:09faviladb is not#2020-03-2719:10favilaalso, it allows some privilege scoping: anyone can transact with a conn, but not with a db#2020-03-2719:16motformthat makes sense, thanks!#2020-03-2719:16motformI guess i should just RT rest of the FM, that best practice page was really good!#2020-03-2719:17motformdo you have any good open source reference projects that use datomic that one could look at?#2020-03-2719:32favilaI know lots of libraries, but I can’t think of any apps offhand#2020-03-2809:44motformno probs, I’m super thankful for all the help already! : )#2020-03-2809:56motformCan I have one last quoting question? In my query, I want to pull and bind-coll with …, which i can add to my q no problems,
{:query
{:find
[(pull ?e [:id :title :recipe :preparation :ingredients]) ...],
:in [$ [?ingredients ...] [?fulltext ...]],
:where
[[?e :ingredients ?ingredients]
[(fulltext $ :fulltext ?fulltext) [[?e ?n]]]]},
:args [#{"gin" "rum"} #{"russian"}]}
However, when I run this, it tells me that Argument ... in :find is not a variable, despite the fact that it can handle the … in the :in clause#2020-03-2810:00motformDoes it have something to do with the fact that the other invocations of … are nested? should not, right?#2020-03-2811:53favilaThis is a peer vs client api difference. Only the former supports destructuring in :find#2020-03-2813:09motformHm, then that’s strange, I thought that Datomic free only used the peer api? It works in my hand written queries, which is what made me confused#2020-03-2816:01favilaYou are using free not cloud?#2020-03-2816:01favilaD/q with map arg is a client api thing, so I assumed you were using cloud#2020-03-2818:54motformHaha, no, I’m a free leecher for the moment. d/q takes args in map or vector form, d/query take maps with :query and :args keys, as I’ve understood it. I have no clue about the client api, have not researched that yet#2020-03-2822:49favilaNvm, this is the map form of query, so you need an extra vector#2020-03-2822:51favila[:find a b c] => {:find [a b c]}. So [:find [a ...]] => {:find [[a ...]]}#2020-03-2718:24johnjare transaction functions a common way to achieve referential integrity in datomic?#2020-03-2718:44arohnerTried to use insecure HTTP repository without TLS:
project:
com/datomic/datomic-lucene-core/3.3.0/datomic-lucene-core-3.3.0.jar
This is almost certainly a mistake; for details see
#2020-03-2718:50arohnerI see that the datomic-pro-0.9-6024.pom contains
<repository>
<id>project</id>
<url></url>
#2020-03-2718:54Alex Miller (Clojure team)I think they were fixing up some stuff like this recently iirc, but I'm not on the team#2020-03-2718:54Alex Miller (Clojure team)not sure if anyone is watching here atm#2020-03-2718:55arohnerI think I figured it out#2020-03-2718:55arohnerwe’re hosting datomic-pro.jar in a private S3 repo. datomic-lucene-core also needs to be there. It wasn’t found in central, so it tried all other repos, and then complained about the http://#2020-03-2718:56Alex Miller (Clojure team)ah, yes that is a common error reporting gotcha#2020-03-2718:56Alex Miller (Clojure team)if it looks in N places and doesn't find it, it just reports the first or last place it looked as it doesn't know where it was expected to find it#2020-03-2718:57Alex Miller (Clojure team)and often that is different than your expectation#2020-03-2722:44emFor Datomic Ions, are there hooks for system (component, integrant, etc.) start/stop? Wondering about best practices here#2020-03-2815:48cjsauerIn the past I’ve placed start calls at the beginning of each HTTP request, right before handing the request to the app handler. Calls to start are idempotent and are essentially no-ops if things are already started. #2020-03-2815:50cjsauer@UNRDXKBNY so it’s basically side-effecting middleware. #2020-03-2815:51cjsauerThis gives you a well-defined place to assoc the started system onto the request map as well. #2020-03-2819:33em@U6GFE9HS7 Yeah, I've basically done the same for system start, though was a little worried about runtime performance and how much the no-op check would cost.
But are there any good solutions for stop? Wondering about how to manage integrations like AWS API gateway and websockets, and having notifications on process cycling or auto-scaling#2020-03-2819:45cjsauerHm, I haven’t needed stop hooks myself. I wonder if you could tie into the Simple Workflow (SWF) hooks that Datomic sets up for coordinating deploys. #2020-03-2820:54emOoh, didn't know about that part of Datomic (thanks for the pointer!), though it makes sense that something like SWF was behind coordinating deploys. I wonder if there's some buried api for this that wasn't too much trouble/not supported to hook into#2020-03-2818:30yuHi everyone! So I'm starting out with datomic and I have already successfully connected datomic-console to a local datomic-transactor instance, but for a remote one (that uses digitalocean droplet for the transactor & heroku-postgres for storage) I'm having a hardtime, I have tried the following commands:
bin/console -p 9000 sql datomic:sql://?jdbc:postgresql://<heroku-postgres-host>:5432/<heroku-postgres-db>?user=<postgres-user>&password=<postgres-password>&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
bin/console -p 9000 sql datomic:sql://?jdbc:postgresql://<heroku-postgres-host>:5432/<heroku-postgres-db>?user=<postgres-user>&password=<postgres-password>
For both when I open datomic-console, it shows the following error msg:
FATAL: no pg_hba.conf entry for host <my-ip-address>, user <heroku-postgres-user>, database <heroku-postgres-password>, SSL off trying to connect to datomic:sql://?jdbc:postgresql://<heroku-postgres-host>:5432/<heroku-postgres-db>?user=<postgres-user-name>, make sure transactor is running
Using the URI in the first command I was able to create a database from the repl, so I can confirm that the transactor is working.
So, does anyone know what I'm still missing here? Would appreciate any help, thanks!#2020-03-2819:43yuFound the solution, just needed to wrap the uri with double quotes "<datomic-uri>"#2020-03-3015:57robert-stuttafordwe're indexing every 30 minutes right now 😄 should we do anything, e.g. increase any live index thresholds?#2020-03-3016:54Joe LaneOn Prem or cloud?#2020-03-3017:25robert-stuttafordon prem#2020-03-3017:25robert-stuttafordit's handling just fine, more curious than anything#2020-03-3019:34lwhortona while back i remember watching a datomic video where someone (stu?) was showing the power of datalog. they were demonstrating how you could debug a slow-running query in datomic without knowing anything about a particular domain by simply moving around the order of the :where clauses. does anyone else remember this? maybe it was inside a day-of-datomic series?#2020-03-3019:38lwhortonaha! i have found the thing, https://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj
but there’s definitely a video out there to go along with this …#2020-03-3019:45robert-stuttafordcheck the http://datomic.com videos page @U0W0JDY4C 😃#2020-03-3121:05Braden Shepherdsondoes datomic limit the maximum size of :db.type/string values? what about bytes?#2020-03-3121:17favilaon-prem: no, unless it’s a string in a tuple, then it’s 256 chars; cloud: strings are 4096 chars outside tuples and 256 chars inside, and it doesn’t have bytes.#2020-03-3121:17favilathese are documented on the on-prem and cloud schema pages#2020-03-3121:20Braden Shepherdsonthanks!#2020-03-3121:06Braden ShepherdsonI realize it becomes inefficient, and it's not really recommended to store a blog post in a string value, but I'm wondering if there's a formal limit.#2020-04-0409:12John LeidegrenI could be wrong about this but the idea is to store structure, so you would store maybe each paragraph as a string and not actually the whole body?
This does create another problem where order is important. At which point, and depending on what you want to do, you might consider actually storing a linked data structure.
Where paragraphs have next pointers. You use pull with recursive patterns to bring in the model. Though, it's a bit odd maybe because you get a hierarchical model back where you might have expected a list of paragraphs.
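That linked-structure idea might look like the following sketch. The attribute names (`:post/first-paragraph`, `:paragraph/text`, `:paragraph/next`, and the `post-eid` binding) are made up for illustration; `'...` in the pull pattern recurs on `:paragraph/next`.

```clojure
;; Hypothetical schema: a post points at its first paragraph, and each
;; paragraph points at the next one.
[{:db/ident :post/first-paragraph
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident :paragraph/text
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident :paragraph/next
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/one}]

;; The recursive pull returns the nested, hierarchical shape described
;; above; iterate/take-while flattens it back into an ordered seq.
(->> (d/pull db
             [{:post/first-paragraph
               [:paragraph/text {:paragraph/next '...}]}]
             post-eid)
     :post/first-paragraph
     (iterate :paragraph/next)
     (take-while some?)
     (map :paragraph/text))
```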
I'm just thinking out loud here, but I think this is kinda what Datomic wants you to do... or at least I get that feeling from now and then that you should fully embrace the graph like nature of Datomic.#2020-04-0106:56teodorluI've always interpreted Rich's The Language of the System as speaking about Clojure. Clojure is meant to be part of some whole. Clojure is not meant to become its own island that can only send messages to other islands. Rather, Clojure can fit nicely as part of a river, and work effectively even if it relies on some upstream source of truth.
Does The Language of the System also describe the use of Datomic?#2020-04-0112:24Alex Miller (Clojure team)It’s intentionally not about either #2020-04-0112:25Alex Miller (Clojure team)And also about both of course#2020-04-0112:48teodorluThat seems right to me.
I think my brain would hurt less with examples / stories where Datomic was used to solve observed problems. I've seen most talks I've found about Datomic, though.
Context: In my daily work with Java, I suspect that we're suffering from a "situatedness", where teams become islands with their own vaguely different entity definitions. I'm not precisely sure what the solution to that should be. But I suspect that understanding Datomic better could help me improve our state. So far, thinking about RDF and specs with global names has been helpful.#2020-04-0113:04Alex Miller (Clojure team)those sound like good ideas to me#2020-04-0116:24johnjWhat has proved to be better practice, to try to keep all attributes of an entity use the same namespace or to mix various namespaces in an entity? the latter looks messy to me#2020-04-0116:27favilathe meaning of the attrs themselves should guide that imo#2020-04-0116:27favilawith the usual caveat against premature abstraction: there may be a difference of meaning you haven’t discovered yet#2020-04-0116:31johnjto clarify, you are saying namespaces should not scope an "entity type" correct?#2020-04-0116:32johnjhttps://github.com/Datomic/mbrainz-sample/blob/master/relationships.png#2020-04-0116:32johnjthat example uses single ns for entities#2020-04-0116:36favilaor maybe it shows co-occurrence clusters of attributes on entities? troll#2020-04-0116:37favilawhat I mean is, for example, if you have an attribute with the same meaning no matter what entity “type” it appears on, don’t split it into multiple attributes just so entities never have attributes from other namespaces#2020-04-0116:38faviladata-modeling-wise, focus on attributes and their meanings, build entity “types” later, if that’s even necessary#2020-04-0116:41favilaIn sql ERDs, you might engage in “polymorphic joined-table inheritance” tricks to share columns across tables, or simply copy the same column name into multiple tables. 
This doesn’t make sense in datomic since you can assert any attr on anything.#2020-04-0116:45johnjfair, I guess I'm getting too hung up in making entities look "elegant" instead of applying common sense but I get your point: namespaces scope a set of attributes, not "entity types"#2020-04-0116:47favilawhat you really want (in datomic) are entity specs rather than types: https://docs.datomic.com/on-prem/schema.html#entity-specs#2020-04-0116:48favilai.e., what are the things I could use an entity for and what constraints do I expect to hold#2020-04-0116:49favilabut an entity may fulfill many specs at once, or some only some of the time#2020-04-0116:49favilaso again, it’s contextual, not baked into the entity itself as a type#2020-04-0116:52favilaspecs give you: required attrs (:db.entity/attrs), and enforcing cross-fact constraints (:db.entity/preds)#2020-04-0116:53favilathere are also attribute predicates which constrain attribute values further than their type (:db.attr/preds), but they are on the attribute not entities, so again they’re expected to be universal#2020-04-0116:54favilathis brings an annoying modeling limitation where you may want an attr with a universal meaning, but contextually want it to have a narrower possible range of values. this is common when dealing with data modeled by XML schema or OOP-ish type systems, where some “refinement” mechanism is used commonly#2020-04-0116:55favilain datomic, you have to decide whether to leave some attr constraints a bit loose and untyped, or split each refinement into a different attr and lose some universality#2020-04-0116:58teodorlu> fair, I guess I'm getting too hung up in making entities look "elegant" instead of applying common sense but I get your point: namespaces scope a set of attributes, not "entity types"
How about making your system for namespacing relations elegant instead?#2020-04-0117:02johnj@U09R86PA4 was reading the attr-preds and entity spec stuff and see what you mean, helpful thanks.#2020-04-0117:04johnjI know they have different purposes, but couldn't entity predicates be used as attribute predicates too? and you get to choose at transaction time when to apply them#2020-04-0117:04johnj@U3X7174KS don't know what you mean, can you elaborate?#2020-04-0117:10teodorluWhen I first had a look at Datomic, I had the urge to get "nice tables". That eventually got me into the same troubles that I would have if I were to use plain SQL tables: I wasn't able to make a "rectangular" structure that fit; what would I do about missing data?
Datomic doesn't require "rectangular" data. Missing data is okay.
When missing data starts becoming okay, you start to think in terms of relations (predicates) first. And those predicates tend to (in my experience) be simpler to describe accurately than the entities.
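As a concrete instance of the entity specs favila mentioned above, here is a sketch shaped after the score example in the on-prem docs; the idents and the predicate namespace are illustrative.

```clojure
;; An entity spec is ordinary transaction data: required attrs plus
;; entity predicates (shaped after the on-prem docs' score example).
[{:db/ident        :score/guard
  :db.entity/attrs [:score/low :score/high]
  :db.entity/preds 'my.app.preds/scores-in-order}]

;; A predicate takes the db value and the entity id being checked:
;; (defn scores-in-order [db eid]
;;   (let [{:keys [score/low score/high]} (d/pull db '[*] eid)]
;;     (<= low high)))

;; Specs are opt-in per transaction via :db/ensure:
;; {:score/low 3 :score/high 10 :db/ensure :score/guard}
```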
Picture from the Wikipedia page on RDF, which also "thinks in terms of relations (predicates) first". With SQL, you have to design your entity (subject). With Datomic, you can focus on your relations (predicates) instead.
https://en.wikipedia.org/wiki/Resource_Description_Framework#2020-04-0117:12favila> but couldn’t entity predicates be used as attribute predicates too?
Yes, you can check anything you want in an entity predicate#2020-04-0117:20teodorluI find this topic to be abstract, hard to understand, and hard to explain. So I went looking for a knowledge graph it's possible to explore to illustrate. Didn't find a good one.
What I did find was Tim Berners-Lee arguing for the use of linked data on the web[1]. He implies RDF[2], but Datomic can be used similarly. Sorry for throwing new stuff at you.
In the article, FOAF is an example of a "system for namespacing relations".
[1]: https://www.w3.org/DesignIssues/LinkedData.html
[2]: https://www.w3.org/TR/rdf-sparql-query/#basicpatterns#2020-04-0117:39johnj@U3X7174KS about your wikipedia comment, in datomic, you still have to think about how to group those attributes as entities though#2020-04-0122:33steveb8nIn my schema, I model using rectangular entities so they fit with integrations to relational dbs. attributes for each entity share a namespace (as you suggest) but there is also a single “all” namespace for attributes that are shared by all entities e.g. :all/id which is a uuid, :all/type for dispatch etc.#2020-04-0123:04dfornikaIs anyone aware of successful examples of integrating OWL ontology terms into a Datomic database? Or thought about how it could be done (or why it shouldn't be done)? There seems to be so much conceptual overlap between Datomic and 'semantic web' technologies (RDF, OWL, JSON-LD) but little technical interoperability.#2020-04-0200:27rutledgepaulvThere's this https://github.com/cognitect-labs/onto. Also @U066U8JQJ has done some investigation into this area. https://www.amazon.com/Semantic-Web-Working-Ontologist-Effective/dp/0123859654 is a great book#2020-04-0200:53dfornikaOh thanks @U5RCSJ6BB. I've seen https://github.com/arachne-framework/aristotle and https://github.com/quoll/kiara and I'm sure I've stumbled on onto before but haven't looked at it recently. I was just looking at that 'Working Ontologist' book on amazon earlier today.#2020-04-0201:15rutledgepaulvnp! there's also a #rdf channel#2020-04-0203:45wilkerluciothat book had a great effect on my modeling, I’m super glad @U5RCSJ6BB pointed me this book, great read!#2020-04-0217:52dfornika@U066U8JQJ Have you been able to integrate terms from OWL ontologies somehow as attributes in datomic schemas? 
Or could you point me to any other resources for clojure/rdf interoperability?#2020-04-0217:58wilkerlucio@U1MBP9HV2 I did play with Jena, got to write some wrappers to work on Jena in a similar fashion to datomic, but just as an experiment, I agree they share a good portion of principles (not by accident, datomic is based on RDF ideas), but I didn’t try to integrate it into the rest of the system, if you wanna look at that jena wrapper it’s here https://github.com/wilkerlucio/jena-clj/blob/master/src/main/com/wsscode/jena_clj/core.clj (disclaimer: I don’t consider it near production ready, just a bunch of random experiments)#2020-04-0218:13dfornikaThanks! At this point I'm trying to just put together a small proof-of-concept so I don't need anything production-ready.#2020-04-0209:11motformI have a quick question about how fulltext search works. I have an es with multiple string fields, which I have concatenated together and added to the db under :e/fulltext, all of which works. However, I’m a bit lost on how queries with fulltext work. Let’s say I have tokens s1 and s2, I assumed that two calls to fulltext would result in an “and” search, which it seems to be doing. However, when I call fulltext just once with the string "s1 s2" I get a different result, returning a much larger amount of es. I’m guessing it’s the first behaviour that I want, I got a bit confused by the noticeably large discrepancy in return values (in one example, separate calls returned 12 items, while a single concatenated call returned 300+).
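Since the search string is a Lucene query, AND semantics can also be had in a single call by making the operator explicit. A sketch, reusing the `:e/fulltext` attribute from the question above:

```clojure
;; "s1 s2" parses as s1 OR s2 under Lucene's default operator, so an
;; explicit AND in the search string gives the intersection in one call.
(d/q '[:find ?e
       :in $ ?search
       :where [(fulltext $ :e/fulltext ?search) [[?e]]]]
     db
     "s1 AND s2")
```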
I mean, it's kind of an implementation detail, but also kind of not really, as the DSL still works#2020-04-0215:05bmaddyIs there some way to get the datomic version in the repl? I'm thinking something like
d/*datomic-version*
#2020-04-0215:06bmaddyFor context, what I'm actually trying to figure out is why this doesn't work:
(d/q '[:find (pull ?e [[:db/doc :as "doc"]])
:where
[?e :db/ident :db.type/boolean]]
(d/db conn))
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/invalid-attr-spec Cannot interpret as an attribute spec: [:db/doc :as "doc"] of class: class clojure.lang.PersistentVector
and I'm wondering if I have an old datomic version or something.#2020-04-0215:16bmaddyNevermind, I was able to figure out the version another way. My datomic is too old, so that's probably the problem.#2020-04-0315:55kennyHi. We are looking into the best practices for loading lots of data into Datomic Cloud. I have seen the documentation on pipelining transactions for higher throughput https://docs.datomic.com/cloud/best.html#pipeline-transactions. From that section,
> Data imports will run significantly faster if you pipeline transactions using the async API, and maintain several transactions in-flight at the same time.
The example that follows does not use the Datomic async API. Why is that? Should it use the async API to achieve higher throughput?
Are there any additional best practices or things to look out for when loading thousands of entities into Datomic Cloud?#2020-04-0316:15ghadione key for batch imports is to always put retries in your transactions#2020-04-0316:17kennyI assume the typical exponential backoff + jitter on a retriable anomaly?#2020-04-0316:17ghadiyea that works#2020-04-0316:18kennyGive up after 3 retries or more?#2020-04-0316:19ghadican't say without knowing your loads#2020-04-0316:20kennyWhat is the function used to calculate number of retries given a particular datoms/transaction + transactions/second?#2020-04-0316:16ghadi@kenny#2020-04-0316:16ghadiThe example of transaction pipelining in the docs does not include backing off on retriable anomalies#2020-04-0316:18ghadiI always do some back of the napkin estimation of # of transactions and number of datoms per transaction#2020-04-0316:19kennyIs there a recommended number of datoms/transaction?#2020-04-0316:20ghadirough order of 1000-10000 datoms#2020-04-0316:20ghadithis is me talking, not the datomic team#2020-04-0316:21kennyWhy a whole order of magnitude of difference?#2020-04-0409:17John Leidegren@kenny I'm going to guess compression. Some datoms compress better than others, so if your payload is very compressible, you'd get away with putting more datoms in each log segment.
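The backoff-with-jitter retry ghadi recommends might be sketched like this. Assumptions are flagged: a synchronous client-API `d/transact`, and a hypothetical `retriable?` predicate that inspects the thrown anomaly.

```clojure
;; Sketch only: retriable? is a hypothetical predicate (e.g. checking the
;; ex-data for a :cognitect.anomalies/busy or :unavailable category).
(defn transact-with-retry
  [conn tx-data {:keys [max-retries base-ms] :or {max-retries 3 base-ms 200}}]
  (loop [attempt 0]
    (let [result (try
                   (d/transact conn {:tx-data tx-data})
                   (catch Exception e
                     (if (and (< attempt max-retries) (retriable? e))
                       ::retry
                       (throw e))))]
      (if (= ::retry result)
        (do
          ;; exponential backoff with jitter
          (Thread/sleep (+ (* base-ms (bit-shift-left 1 attempt))
                           (rand-int base-ms)))
          (recur (inc attempt)))
        result))))
```

Feeding this batches on the order of 1000 datoms, with several in flight at once, matches the rough numbers given in the thread.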
If you have really huge log segments you may run into limitations in storage. For example, the DynamoDB backend has a limit of 400 KiB per value. A log segment larger than that (i.e. transaction) cannot be committed into storage.#2020-04-0316:18ghadigood practice to label the transactions themselves with some metadata#2020-04-0316:21ghadi[:db/add "datomic.tx" :db/doc "kenny did this, part 1/15"]#2020-04-0316:22kennyHmm yeah. In this case I won't know the denominator of your part fraction there. I can still label them though.#2020-04-0316:21ghadior have stronger idempotence markers in the DB metadata#2020-04-0316:23kennyAlso still curious as to if I should be using the Datomic Cloud async api for maximal throughput.#2020-04-0322:21BrianHello! I'm using pull in my Datomic query and want to blend it with an :as statement to change the name of something.
My data looks like this:
{:category {:db/ident abcd}}
I can use pull to grab it with [{:category [:db/ident]}] which returns a structure like the above. I can rename :db/ident by pulling like this [{:category [[:db/ident :as :hello]]}]. This returns a {:category {:hello abcd}}
however what I would love to be able to do would be to have it return {:hello abcd} essentially renaming that whole path. I'm doing this within a larger query using pull. I tried pulling this specific part out of the query like this with the :keys option:
(d/q '[:find (pull ?e [:1 :2 :3]) ?ident
:keys :nums :hello
...
[?cat :db/ident ?ident]
...)
but this ends up improperly nesting my return values because I don't want the :nums part and want the hello part to be in the same map as [:1 :2 :3].
Combining all of the above I tried something like
(d/q '[:find (pull ?e [:1 :2 :3
[{:category [:db/ident]} :as :hello]])
...)
However this didn't work and I suspect this isn't possible because pull doesn't know that I'm guaranteed to have a single value at the very end and not a vector somewhere in there. Am I right that it's impossible to tell pull to drill down to that last value and return only that last value under a new specific key? Is it possible to do what I desire some other way?#2020-04-0400:25favilaPull can rename keys or default/limit values, but it cannot transform map shapes. You have to post process#2020-04-0408:57John LeidegrenI'd like some input on general EAV data design. I've started using tuples to create unique identities so that I can enforce a uniqueness constraint over a set of refs within an entity. It looks something like this (tx map):
{:db/id 17592186045416
:list-of-things [{:some-unique-identity [17592186045416 17592186045418]
:some-ref 17592186045418
:meta "foo"}
{:some-unique-identity [17592186045416 17592186045420]
:some-ref 17592186045420
:meta "bar"}]}
So, what I'm trying to do is prevent there from being two or more :some-ref for the same thing in this set of :list-of-things. Is this nuts or does this make sense? I'm worried I just invented some convoluted way of doing something which should be modelled differently?
I find tuple identities to be incredibly useful because I get this upsert behavior each time but I don't see how I can avoid the potential data race that would otherwise occur. Any suggestions here would be much appreciated.#2020-04-0411:48John LeidegrenI think I figured this out.
---
These unique identity tuples are needed because I created entities to group attributes but that's already provided by namespaces. I could just as well let the namespaces encode the topology and let the grouping of things be based on that.
---
These "topological" identities wouldn't be needed if I went for bigger, fatter entities, over excessive container like entities. These intermediate entities that encode a tree like structure are just causing me pain. And I will do away with them.#2020-04-0411:20David PhamDoes Datomic support Java11?#2020-04-0418:05jeff.terrellIs it possible to ssh in to the non-bastion instance in a Solo topology for Datomic Cloud?#2020-04-0418:05jeff.terrellI'm seeing some weird behavior in which a freshly restarted instance seems to get hung whenever I try to deploy (which fails).#2020-04-0418:06jeff.terrellOddly, the CPU utilization, as visible on the CloudWatch dashboard, jump to 45% and stays there after I try a deploy, whereas before it's near 0%.#2020-04-0418:07jeff.terrellOnce I start the deploy, neither an existing socks proxy nor a new one allows connections through to Datomic, whereas before the deploy it works fine.#2020-04-0418:08jeff.terrellI can datomic solo reset to get back to a working state…but if I try to deploy, I get back into the hung state.#2020-04-0418:08jeff.terrellI'd like to ssh in to see what process is using so much CPU.#2020-04-0418:10jeff.terrellI'm fairly perplexed by all of this. It's on a fresh AWS account and Datomic Cloud instance, and I've had success with Ions and the Solo topology before…#2020-04-0418:12jeff.terrellOne more clue: the CodeDeploy deployment fails on the final ValidateService event. The script output has about 100 lines of [stdout]Received 000. 
I think this means that the Datomic health check request is failing to get a response at all, let alone a 200.#2020-04-0418:24ghadi@jeff.terrell sometimes I jump into the Datomic nodes to do a jstack dump.#2020-04-0418:25ghadiBy default, the bastion cannot SSH to the nodes because of a security group#2020-04-0418:25ghadithere is a security group that is called datomic-something-nodes#2020-04-0418:25ghadiyou need to modify that SG to allow the bastion instance in on port 22#2020-04-0418:25jeff.terrellAh, security groups, right! I assumed that the compute node would already be configured to accept traffic from the bastion. But yeah, maybe not on port 22, right. Thanks!#2020-04-0418:26ghadiProtip: you can add an entry to that security group, referring to the bastions security group symbolically#2020-04-0418:27ghadiinstead of hardcoding an IP address or CIDR block#2020-04-0418:27ghadithen, you need the ssh keypair for the nodes, which you had when you created the stack#2020-04-0418:28ghadiso what I do is I add the bastion's key from ~/.ssh/datomic-whatever to my ssh agent:#2020-04-0418:28ghadissh-add ~/.ssh/datomic-whatever#2020-04-0418:28ghadithen add the node keypair:
ssh-add ~/wherever/nodekey#2020-04-0418:29ghadithen I ssh to the bastion with -A, which forwards the local ssh-agent#2020-04-0418:29ghadithen from there you can ssh to the node in question#2020-04-0418:29ghadisudo su datomic to become the datomic user#2020-04-0418:29jeff.terrellAh, fantastic tips, thanks! I would have been stumbling around trying to scp the appropriate private keys over to the bastion.#2020-04-0418:29ghadiso that you can run jstack on the pid, or poke around#2020-04-0418:30ghadiMy pleasure. Whatever you do end up finding, see if there is some signal of it in CloudWatch Logs or CodeDeploy or wherever#2020-04-0418:31ghadiif there's not, maybe worth a support ticket?#2020-04-0418:32jeff.terrellOK. I haven't seen any clue in those places yet. I'll be sure to follow up as needed to be sure others don't run into this.#2020-04-0418:52jeff.terrellWhen I got into the system, I learned that the CPU utilization was because of bin/run-with-restart being called to start some Datomic-related process over and over, which was failing every time. When I ran the command manually, it tells me:#2020-04-0418:52jeff.terrell> com.amazonaws.services.logs.model.ResourceNotFoundException: The specified log group does not exist. (Service: AWSLogs; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: c58f56ce-3b7f-4be1-bbd3-463a950018c7)#2020-04-0418:52jeff.terrell…followed by a stack trace, and an anomaly map:#2020-04-0418:53jeff.terrell:datomic.cloud.cluster-node/-main #error {
:cause "Unable to ensure system keyfiles"
:data {:result [#:cognitect.anomalies{:category :cognitect.anomalies/incorrect} #:cognitect.anomalies{:category :cognitect.anomalies/incorrect}], :arg {:system "pp-app", :prefix "s3://"}}
:via
[{:type clojure.lang.ExceptionInfo
:message "Unable to ensure system keyfiles"
:data {:result [#:cognitect.anomalies{:category :cognitect.anomalies/incorrect} #:cognitect.anomalies{:category :cognitect.anomalies/incorrect}], :arg {:system "pp-app", :prefix "s3://"}}
:at [datomic.cloud.cluster_node$ensure_system_keyfiles_BANG_ invokeStatic "cluster_node.clj" 336]}]
:trace ,,,}#2020-04-0418:54jeff.terrellI'm thinking this is not because of something I did wrong (though I'm happy to be corrected on that point). Opening a support ticket…#2020-04-0504:59Braden ShepherdsonIs the embedded storage in Datomic Free sufficient for a small website I'm building for a hobby group? The old one it's replacing is a single VPS with PHP scripts and MySQL, but there are perhaps 130,000 rows in the database, maybe a million datoms. The transaction rate is modest though, maybe a couple hundred an hour at most. There's just no context for what it's capable of.#2020-04-0506:29John LeidegrenSo, it's based on the H2 Database Engine and nothing in Datomic is inherently slow.
The way it uses the database is that it pulls segments of data from the database, so you don't have a datom to row mapping or anything like that. You basically use the database as a key value store.
130,000 rows doesn't sound like a lot and a couple of hundred transactions an hour also seem small.
I haven't done this myself but I don't think you will have any problems.#2020-04-0512:11Braden ShepherdsonOkay, I'll give it a try. I know it's a small database, but I was worried because it's also the "dev" version. I could imagine a very naive dev version that just writes chunks of EDN to files and can only handle a few thousand datoms, or has to fully read into memory, or similar.#2020-04-0512:12Braden ShepherdsonBut it sounds like it's the other way around, and the dev version uses the capable though modest free version's embedded storage.#2020-04-0514:47John LeidegrenYes, I know for a fact that it does! The storage layer is just insert or update of KV pairs that contain chunks of datoms for either log or index data.#2020-04-0514:49John LeidegrenDepending on the I/O subsystem of the box that is running the thing, it should be plenty capable. I have no direct experience with H2 but it probably performs well enough and it's not asked to do anything other than shuffling bytes back and forth. So, it's a quite ideal situation.#2020-04-0517:30favilaThe real perf limitation is that the dev transactor process is also the storage process (it opens another port to serve peers’ sql queries)#2020-04-0517:31favilaI don’t know how well optimized that server code is#2020-04-0517:32favilaBut I have used shared dev for light duty just fine; also as the target of large bulk import jobs just fine#2020-04-0521:18hadilsDatomic Cloud Question: Is there a "hook" for starting up processes when the Ions are started, e.g., after a deploy? I want to use Quartzite and don't know the best way to initialize it...Thanks in advance.#2020-04-0600:38denikIs there a way to have optional inputs in datomic?? In my case, I'd like the where clauses that include undefined inputs to be ignored:
(d/q
'[:find ?e
:in $ ?tag ?more ?other
:where
[?e :ent/foo ?foo]
[?e :ent/tags ?tag]
[?e :ent/more ?more]
[?e :ent/other ?other]]
(d/db conn)
:tag
; skip :more
:other
)
Here the value for ?more should not be passed but because the value for ?other follows, ?more is interpreted as ?other. Passing nil has not worked for me. It seems that it is then used as the value to match in the query engine. Do I need to write a different query for each optional input?#2020-04-0600:55favilaGenerally yes: you can also use a sentinel value to indicate “don’t use” and an or or rule#2020-04-0601:07denikthanks @favila I'm running into some issues doing so. For example, or demands a join variable. Could you point me at an example or some pseudocode?#2020-04-0601:09denikThis also breaks down when using inputs as arguments to functions, i.e. > . Now my sentinel value has to be a number which would backfire or otherwise the query engine will throw an exception.#2020-04-0601:11favilaI maybe misunderstand your use case. Your example query doesn’t make sense to me: how do you have a query which has no ?more clauses nor is called with that arg as input but still has it in the :in? How did you get in this situation? What I thought you were talking about is a scenario like this:#2020-04-0601:12denik> I'd like the where clauses that include undefined inputs to be ignored#2020-04-0601:13denikBut I'm also fine wrapping those values in or-like clauses. However, that hasn't been working well due to numerous edge-cases#2020-04-0601:18favila
:in $ ?id ?opt-filter
:where
[?e :id ?id]
(or-join [?e ?opt-filter]
(And
[(ground ::ignore) ?opt-filter]
[?e])
(And
(Not [(ground ::ignore) ?opt-filter])
[?e :attr ?opt-filter]))]
Db 123 ::ignore)
#2020-04-0601:18favilaPls excuse everything, I’m on a phone#2020-04-0601:20denikJust retyped this for clarity
(d/q '[:find ?e
:in $ ?id ?opt-filter
:where
[?e :id ?id]
(or-join [?e ?opt-filter]
(and
[(ground ::ignore) ?opt-filter]
[?e])
(and
(not [(ground ::ignore) ?opt-filter])
[?e :attr ?opt-filter]))]
(d/db conn) 123 ::ignore)#2020-04-0601:20favilaYour example query doesn’t have clauses using undefined input, it has a slot for an input that you don’t fill
#2020-04-0601:20favilaThat’s what confuses me#2020-04-0601:20denikright, because hash-map-like destructuring is not supported in :in#2020-04-0601:21favila?#2020-04-0601:22favilaWhere did ?more come from? Was this query built with code?#2020-04-0601:23denikNo, rather the query is supposed to stay as it is but inputs should be nullable / or possible to be disabled#2020-04-1616:47favilayou’re not familiar with crux?#2020-04-1616:47Urijuxt-crux#2020-04-1616:48Urihttps://github.com/juxt/crux#2020-04-1616:48favilayes I’m aware of it#2020-04-1616:48Uriahh#2020-04-1616:48Urinow i understand 🙂#2020-04-1616:48favilawhat do you mean by “not familiar with this”?#2020-04-1616:48Urimissing a comma there#2020-04-1616:48Urii thought you meant mongodb-crux#2020-04-1616:48favilaoh#2020-04-1616:49Urinm i understand now#2020-04-1616:49Uriyeah that's what seems to me too#2020-04-1616:49Urijust wanted to make sure because i'm completely a newbie to graph dbs#2020-04-1616:49favilayeah, so, if all you are doing is dumping docs from mongo into another db that has better querying, crux may be a better fit#2020-04-1616:49Uricool#2020-04-1616:49favilait’s less work because you don’t have to translate the docs into a graph#2020-04-1616:50favilaand you don’t have to have a plan for inconsistent data#2020-04-1616:50favilaand you can use crux’s “valid time” to model your “batch number” concept#2020-04-1616:50Uriso what they said in the crux channel is that i can actually remove or invalidate a transaction#2020-04-1616:50Uriwhich is arbitrarily big json essentially iiuc#2020-04-1616:51Uri(my graph)#2020-04-1616:51favilacrux has bi-temporality vs datomic, but it gives up being a referentially-consistent graph and has a larger unit of truth (the document)#2020-04-1616:52Urihmm interesting, what does "referentially-consistent" mean?#2020-04-1616:52favilacrux doesn’t have references, in your json example, you need to manually know how to make “children” values join to something 
else#2020-04-1616:53faviladatomic has a ref type, the thing on the other end is an entity#2020-04-1616:53faviladatomic can also add/retract individual datoms: crux can only add a new document#2020-04-1616:54favilaIMO datomic is better as your “source of truth” primary db, and crux is better for dealing with “other people’s” messy data which you may not understand or have a full schema for#2020-04-1616:54Urii will ask them about it - sounds important
i mean, i do want to be able to identify between entities across my json objects#2020-04-1616:55Uriit's not so much other people's data, but more like a scrape of their data that i make, so i'm in control of everything ingested#2020-04-1616:55favilayou can with datalog, but it’s by value (a soft-reference) not an actual reference#2020-04-1616:55favilaI mean, that’s all mongo is doing#2020-04-1616:55Uriso in crux everything is a string/int/date etc'? there's no ref?#2020-04-1616:55favilamongo doesn’t have refs either right?#2020-04-1616:55Uriright#2020-04-1616:55Urimongo/json#2020-04-1616:58Uribut in crux if i load an object, which is essentially a lot of triplets, does crux automatically assign an id to symbolize the entity?#2020-04-1616:58Urior i have to manage it myself somehow#2020-04-1616:59Uriby having an id field in the json objects i load in?#2020-04-1617:00Uri{id: '123', age: 40} then i add {name: "joe", id: '123'}
so i can say in datalog "get me the age and name of things that have id=123"#2020-04-1617:01favilayou need to assign an id to a document when you create the object#2020-04-1617:02favilaif you have refs to something other than documents, you have to figure something out yourself#2020-04-1617:02favilacrux will ingest any EDN and decompose it into triples for query purposes, so you can still do arbitrary joins#2020-04-1617:02favilabut it doesn’t know the meaning of those attributes so it can’t enforce anything#2020-04-1617:02favilain fact, it doesn’t even enforce types#2020-04-1617:03Uriso everything is values, and only the document (i.e. what i load in) has a reference#2020-04-1617:03favilacorrect#2020-04-1617:03Urigot it - wow that's good to know#2020-04-1617:03favilahttps://opencrux.com/docs#transactions-valid-ids#2020-04-1617:03favilacrux has four transaction operations#2020-04-1617:04favila:crux.db/id is magic#2020-04-1617:04favilait’s required by every document#2020-04-1617:04favilaand there’s some limit to the kinds of values it can have#2020-04-1617:05favilahonestly this property, though scary for a primary data store, is absolutely freeing for data ingestion#2020-04-1617:05favilaI don’t need to write a complex ETL pipeline before I can use other people’s document-shaped data (and most of it is document-shaped)#2020-04-1617:06favilaI can figure out the joins later; I can retain broken data, etc#2020-04-1617:06favilabut I can always faithfully retain what they said, and transform/normalize/clean-up before moving into a primary datastore that isn’t so sloppy#2020-04-1617:07Uriin some sense this is something i was missing in datomic - the "who"#2020-04-1617:07Uriwho knows what#2020-04-1617:07Urikind of a theory of mind layer over the db#2020-04-1617:07favilayou can kind of do this by using transaction metadata, but you are subject to the limitations on transactions#2020-04-1617:08faviladatomic is built with a closed-world assumption--it is the source of
truth#2020-04-1617:08favilaother systems like rdf (which datomic is heavily inspired by) have open world assumptions and need complicated reification schemes to use datoms themselves as the subject or object of a predicate#2020-04-1617:09favilacrux takes a different approach by just letting you join on anything you want and working hard to make it fast#2020-04-1617:10favilaI think it’s best suited to cases where the provenance of the data you put into it is not yourself#2020-04-1617:10Uriideally i would just want to treat transactions as entities themselves and associating them (e.g. with a batch #)#2020-04-1617:11Uribecause the crux way is also limiting in some sense#2020-04-1617:12Uriand do datalog queries on a subset of transactions#2020-04-1617:12favilasure, but think through what the loading code would look like for crux vs datomic here#2020-04-1617:12Urifor my current problem - i agree it sounds like i have to compromise#2020-04-1617:13favilaalso, you can’t have a single tx for a batch in datomic--that tx is too big#2020-04-1617:13favilayou should aim for ~1000 datoms per tx#2020-04-1617:13favilayou can go over, it’s fine, but you shouldn’t have tens of thousands of datoms in a tx#2020-04-1617:14Uriah so i meant - treat datoms as entities and do queries on a subset of datoms*#2020-04-1617:14Urilike time travel lets you do it over the time axis#2020-04-1617:14favilaoh, so the “batch-7-joe” solution?#2020-04-1617:14Urii think computationally this would be intractable to do generally#2020-04-1617:15favilayou can do this with tuple refs, if each entity has a batch attribute and whatever their native id attribute is#2020-04-1617:15favilabut you have the same problem of needing to ingest the data in a topological-ish order so your refs work#2020-04-1617:17Urii'm thinking of something maybe simpler - imagine that each datom (not tx) had its own id - i think it's the instant today (?)
then i could say datoms 1, 2 and 7 belong to batch #8, and i would like a higher order datalog query that first chooses a subset of datoms, then runs the internal query#2020-04-1617:17Urii mean - again computationally i don't see how you could do that generally, but if you had infinite cpu#2020-04-1617:21favilayou can do it with indexing#2020-04-1617:22favila{:entity/batch 7 :entity/id "foo" :entity/batch-id [7 "foo"]} where :entity/batch-id is a tuple attr#2020-04-1617:23favilayou only have to start your query from there; the refs outward should all be references to batch-7 entities anyway#2020-04-1617:24favilathis is in the “all batches available simultaneously” approach#2020-04-1617:25favilain the “transact deltas” approach, you can put the batch onto the tx metadata; then as-of time travel accomplishes the same thing#2020-04-1617:25favila(assuming you didn’t make a mistake with your deltas)#2020-04-1710:42Uriwhat if i use this:
> You can add additional attributes to a transaction entity to capture other useful information, such as the purpose of the transaction, the application that executed it, the provenance of the data it added, or the user who caused it to execute, or any other information that might be useful for auditing purposes.
and my batch is many transactions all labeled,
then use this to retract the previous batch:
https://stackoverflow.com/a/25389808/378594
then when I want to query on a certain batch, I use the point in time where it was inserted (that's actually the semantics I want - the state of the database beyond my batch at a certain point)
would that work?#2020-04-1712:42favilaWill each batch consist only of new entities?#2020-04-1712:43favilaBatch-6 vs batch-7 joe?#2020-04-1712:44favilaIf so, this is the same as our each-batch-available-simultaneously scenario discussed earlier, but with the additional unnecessary deletion step#2020-04-1712:46favilaIf instead joe is the same entity across batches: when you retract old batches, are you carefully not retracting datoms which are still valid? If so, you aren’t deleting previous batches but transacting the delta between the current db and latest batch.#2020-04-1712:47favilaIf you are deleting everything from a batch, this is both not what you want and unnecessary, as you are just replicating the d/since feature#2020-04-1712:48favilaMaybe what you are missing is that “reasserting” a datom with a new batch doesn’t add new datoms—the previous datom is kept (it’s still valid!) so it will always have the tx of the first batch where it became true, not the last batch#2020-04-1805:43onetomThis was a very interesting conversation!
I'm also ingesting data regularly from a MySQL database and face problems similar to those you discussed.
However, is it necessary to persist many earlier batches?
Do the batches reference any other data, which doesn't change over time?
I'm asking because maybe you don't want to put your batches into the same DB.
You can create a new DB for every day maybe.
Alternatively, you can also just keep some of the daily snapshots in memory and instead of persisting them with d/transact, you can use d/with to virtually combine your batch-of-the-day onto the rest of the data in some kind of base Datomic DB.
what do you think, @U011WV5VD0V?#2020-04-1809:35Urivery interesting.
first of all @U09R86PA4 I see your point. I really do want to keep the same entity id. If my newly added edges never intersect with my base db then retracting everything would work, but this is dangerous and might not be true at some point in the future.
@U086D6TBN yes it would be preferable to keep this foreign info / copy in a separate place and compose the base db (at a certain instance) and a version of the foreign db ad hoc. In memory would work today but is not future proof (near-future...).
This is a bit like namespacing I think, but with composition. So I guess these features don't exist yet?#2020-04-1809:38onetomHow long would you need to keep older days' snapshots?
Based on how you described "invalidation" it sounded like you wouldn't need to access yesterday's import even today anymore.#2020-04-1809:39onetomAlso, how big is your dataset, and how long does it take to import it?#2020-04-1809:42onetomI'm working with ~4million entities, each with 2-4 attributes only. That takes me around 5mins to import on an 8core i9, with 80GB RAM. Not sure which of my java processes is my app and which my transactor, but none of them consume more than 16GB RAM#2020-04-1809:44onetomAlso, I'm directly querying my data from MySQL with next.jdbc fully into memory and then transacting it from there#2020-04-1809:46onetomI found that json parsing can have a quite serious performance impact, so it's better if you cut that step out of your data processing pipeline#2020-04-1810:21Urithe only reason to return to older snapshots is for debugging and analytics purposes#2020-04-1810:21Uriso it does happen sometimes#2020-04-1810:21Urias for size - it's not nearly as big, I'd say 100k entities#2020-04-1810:22Uri(might be bigger in the future)#2020-04-1810:22Uri(probably)#2020-04-1810:25Uri(I'm not working with clojure so would need another component to handle this ad hoc transacting)#2020-04-1814:40favila
> I really do want to keep the same entity id. If my newly added edges never intersect with my base db then retracting everything would work, but this is dangerous and might not be true at some point in the future.
@U011WV5VD0V No, it’s guaranteed not to work because it’s not just edges, it’s every datom. Eg batch 1 transacts [entity :doc-id “joe”] (an identifier not a ref/edge). Batch 2 attempts to transact the same—but since that fact already exists (by definition—it is an identifier) datomic does not add the datom and the tx of [entity :doc-id “joe”] is still a batch 1 tx. If you then delete all batch 1 datoms, you have removed the “joe” doc identifier. The only thing left in the db is whatever datoms were first asserted by batch 2#2020-04-1814:43favila> I’m not working with clojure
Really? What are you using?#2020-04-1814:45favilaAdding a new db per day is not a bad idea#2020-04-1816:12Uripython and javascript#2020-04-1817:13favilaSo how are you interfacing with datomic? Graalvm?#2020-04-1823:44Urii'm not (yet)
i need a graph database with some versioning features and am evaluating different solutions#2020-04-1900:31favilaDatomic without a jvm is going to be a bad time#2020-04-1612:40vlaaad(d/q '[:find ?k ?v
:in $ ?q
:where
[(.getClass ?q) ?c]
[(.getClassLoader ?c) ?cl]
[(.loadClass ?cl "java.lang.System") ?sys]
[(.getDeclaredMethod ?sys "getProperties" nil) ?prop]
[(.invoke ?prop nil nil) [[?k ?v]]]]
db {})#2020-04-1612:41vlaaadfun stuff with interop on datomic cloud ^#2020-04-1612:44vlaaaddidn’t expect query to provide full jvm access though..#2020-04-1613:40vlaaadOr just read-string with read-eval:
(d/q '[:find ?v
:in $ ?form
:where
[(read-string ?form) ?v]]
db "#=(java.lang.System/getProperties)")#2020-04-1613:41Joe LaneCome on now Vlad, what's the first rule of hash-equals club!?#2020-04-1613:42vlaaadYeah, right 😄#2020-04-1613:42vlaaadit’s just that absence of eval gives a false sense of security#2020-04-1613:58Ben HammondI see the error
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Only find-rel elements are allowed in client find-spec, see
when attempting to query a scalar value like
(client/q {:query
'[:find ?uid .
:in $ ?eid
:where
[?eid :user/uuid ?uid]]
:args [(client/db datomic-user-conn)
17592186045418]})
is there a way to query datomic client for single values?
is this just fundamentally not possible?#2020-04-1613:59marshall@ben.hammond (ffirst …#2020-04-1613:59marshallresult ‘shape’ specifications in the :find clause do not affect the work done by the query#2020-04-1613:59marshallin client or in peer#2020-04-1614:00Ben Hammondyeah so I take that as a not possible#2020-04-1614:00Ben Hammondthanks#2020-04-1614:00marshallthey only define what is returned to you#2020-04-1614:00Ben HammondI guess I could reduce the chunksize to 1#2020-04-1614:00marshallin your case, you have a single clause#2020-04-1614:00Ben Hammondbut I don't think I care all that much#2020-04-1614:00marshallyou could just use a datoms lookup directly#2020-04-1614:01marshallor better even still, in that example you already have an entity ID#2020-04-1614:01marshallyou should use pull#2020-04-1614:01Ben Hammondhttps://docs.datomic.com/on-prem/best-practices.html#prefer-query
?#2020-04-1614:02Ben Hammondthis advice is marked as 'on-prem', but I presume it is equally valid for cloud?#2020-04-1614:08favilaprefer query vs datoms or d/pull|d/entity + manual join and filtering#2020-04-1614:09favilai.e. query for “where” work, not “find” work#2020-04-1614:01marshall(d/pull (d/db conn) '[:user/uuid] eid)#2020-04-1614:02Ben Hammondoh I like the look of that#2020-04-1614:09marshallonprem or cloud?#2020-04-1614:09marshallSee: https://docs.datomic.com/cloud/best.html#use-pull-to-retrieve-attribute-values#2020-04-1614:10marshall“You should use the `:where` clauses to identify entities of interest, combined with a `pull` expression to navigate to attribute values for those entities. An example:”#2020-04-1614:10marshallso if you already have your entity identifier, use pull#2020-04-1614:57Ben Hammondthank you#2020-04-1614:26Drew VerleeI never noticed this before but it seems like there isn't parity between the find specs of cloud and on-prem
cloud: https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs
on-prem: https://docs.datomic.com/on-prem/query.html
Does anything highlight other api differences?#2020-04-1614:28marshall@drewverlee https://docs.datomic.com/on-prem/clients-and-peers.html#2020-04-1614:33Drew VerleeThanks. ill have a look.#2020-04-1714:07kennyWe were running an import of data into a Datomic Cloud solo instance and it appears to have crashed. CPU is stuck at 0%. All calls to d/connect results in a Connect Timeout. Is there no health check that can detect and cycle the process/vm in a case like this?#2020-04-1714:13kennyLast log line
{"Gcname":"G1 Old Generation","Gcaction":"end of major GC","Gccause":"Allocation Failure","Msg":"GcEvent","Duration":5584,"Type":"Event","Tid":8,"Timestamp":1587119656023}#2020-04-1714:14kenny#2020-04-1714:38kennyI opened a Datomic support request since this is probably too specific.#2020-04-1816:19joshkhi'm interested as well. recently i discovered that one node in my query group had been completely wedged from a bad query, and i only discovered it hours later when a routine code deployment failed due to insufficient memory.#2020-04-1714:34Ben Hammondwhen I retrieve a db.type/uri datom from datomic cloud using pull it comes back with an unexpected class com.cognitect.transit.impl.URIImpl.
I was sort of hoping for a .URI
I know I can manually convert it into a java.net.URI using str, my question is whether this is expected behaviour, or whether I have something misconfigured
or should I be de-transiting the response from pull#2020-04-1805:51onetomim also using :db.type/uri attrs, but only thru on-prem peers.
i would definitely expect it to work on the cloud version too, out of the box.
so, my guess is that it's a bug.#2020-04-1821:29Drew VerleeI tried running something very similar to this example:
;; query
[:find [?name ...]
:in $ ?artist
:where [?release :release/name ?name]
[?release :release/artists ?artist]]
and it results in an error:
Only find-rel elements are allowed in client find-spec, see
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"Only find-rel elements are allowed in client find-spec, see ",
Which is confusing to me because that's not what the grammar implies to me
find-spec = ':find' (find-rel | find-coll | find-tuple | find-scalar)
find-rel = find-elem+
find-coll = [find-elem '...']
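Since the client API supports only the relation find spec, the collection and scalar specs from the grammar above have to be emulated by post-processing the relation result. A rough sketch, assuming `db` is a client database value and `artist-id` is bound:

```clojure
;; The client API returns only relations (sets of tuples), so
;; peer-style collection/scalar find specs are emulated client-side.
;; `db` and `artist-id` are assumed to be in scope.

;; peer:  :find [?name ...]   ->  client: unwrap each 1-tuple
(->> (d/q '[:find ?name
            :in $ ?artist
            :where
            [?release :release/name ?name]
            [?release :release/artists ?artist]]
          db artist-id)
     (map first))

;; peer:  :find ?name .       ->  client: first value of the first tuple
(ffirst (d/q '[:find ?name
               :in $ ?artist
               :where
               [?release :release/name ?name]
               [?release :release/artists ?artist]]
             db artist-id))
```

The query itself does the same work either way; only the shape of what is returned differs.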
#2020-04-1821:31Drew Verleeok, i think it's linking to the wrong docs. This grammar, specific to the cloud, does say that: https://docs.datomic.com/cloud/query/query-data-reference.html#2020-04-1821:35Drew VerleeThis is doubly confusing because the example is from the official docs: https://docs.datomic.com/cloud/query/query-executing.html#2020-04-1821:43marshallClient does not permit find specifications other than relation.#2020-04-1821:44marshallI'll fix the example#2020-04-1822:27Drew VerleeThanks!#2020-04-2016:10joshkhis it normal to see what appears to be 1 consistent OpsPending in the Datomic CloudWatch dashboard spanning the course of days?#2020-04-2112:39tatutanyone have tests using a datomic cloud db running on github actions? my build can't find the ion jars, IIRC there were some region restrictions in accessing the s3 release bucket#2020-04-2114:28pvillegas12Is there a way to query for all datoms affected by a transaction? I can find datoms that are affected by a given transaction and associated with a particular entity like this:
(d/q '[:find ?attr ?value ?txid
:in $ ?txid ?entity
:where
[?entity ?attr ?value ?txid]
]
(d/history (d/db (cloud-conn))) 13194140275534 69102106782505590)
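The `tx-range` suggestion that follows in the thread can be sketched like this; it reads the transaction's datoms from the log instead of querying history with an unbound entity (`conn` and the basis `t` value are assumptions here):

```clojure
;; A sketch of the log-based alternative: fetch everything a single
;; transaction touched via the client API's tx-range.
(require '[datomic.client.api :as d])

(defn tx-datoms
  "Datoms asserted or retracted by the transaction at basis t.
  `conn` is assumed to be a client connection."
  [conn t]
  (-> (d/tx-range conn {:start t :end (inc t)}) ; half-open range [t, t+1)
      first      ; the single transaction in that range
      :data))    ; its datoms: [e a v tx added?]
```

Unlike the history query, this never needs an entity binding, so there is no full-scan risk.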
#2020-04-2114:28pvillegas12If ?entity is not bound, I get an Insufficient binding of db clause: [?s ?attr ?value ?txid] would cause full scan#2020-04-2114:29marshalltx-range#2020-04-2114:29marshall@pvillegas12 ^#2020-04-2114:29marshallhttps://docs.datomic.com/client-api/datomic.client.api.html#var-tx-range#2020-04-2114:30marshallsee https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/log.clj#L50 for an example#2020-04-2114:32pvillegas12@marshall Thank you, that’s exactly what I needed 😄 😄 😄#2020-04-2118:21armedHello everyone.
(first (d/query
{:query '{:find [(boolean ?u) ?passwords-equals ?active]
:keys [login-correct?
password-correct?
user/active?]
:in [$ ?login ?password]
:where [[?u :user/login ?login]
[?u :user/password ?pwd]
[?u :user/active? ?active]
[(= ?password ?pwd) ?passwords-equals]]}
:args [src-db login password]}))#2020-04-2118:22armedWhy does this code not work (it returns empty)? But it returns data when I replace the last clause with [(.equals ?password ?pwd) ?passwords-equals]]#2020-04-2118:22armedWhy is = not treated like a function expression?#2020-04-2118:30favilaMy guess is it’s a special form for performance. It’s already not the standard clojure.core/= function.#2020-04-2118:30favila!= is another one#2020-04-2118:32armedThanks. The official docs don't mention this.#2020-04-2118:33armedBTW, = works as expected if ?password equals ?pwd, but fails otherwise.#2020-04-2118:35armed(clojure.core/= ?password ?pwd) works as expected#2020-04-2118:56ghadihttps://docs.datomic.com/cloud/query/query-data-reference.html#range-predicates#2020-04-2207:38onetomI have an 8core iMac with 80GB RAM.
Trying to import bigger amounts of data on it into an on-prem datomic dev storage.
I see very little CPU utilization (~10-20%)
What can I do to make a better use of the machine?
I'm already doing this:
Launching with Java options -server -Xms4g -Xmx16g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
and in my properties file for the txor, this:
## Guessed settings for -Xmx16g production usage.
memory-index-threshold=256m
memory-index-max=4g
object-cache-max=8g
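For the pipelining question raised just below, the pattern described later in the thread (transact-async with a bounded in-flight window, dereffed in submission order) can be sketched roughly like this, assuming the on-prem peer API with a connection `conn` and `tx-batches`, a sequence of transaction-data vectors:

```clojure
;; Minimal sketch of pipelined imports with backpressure. Keep a bounded
;; window of in-flight transact-async futures; deref the oldest one in
;; submission order so a busy transactor naturally slows the producer.
(require '[datomic.api :as d])

(defn pipeline-transact!
  [conn tx-batches window]
  (loop [in-flight clojure.lang.PersistentQueue/EMPTY
         batches   tx-batches]
    (cond
      ;; room in the window and input remaining: submit another batch
      (and (seq batches) (< (count in-flight) window))
      (recur (conj in-flight (d/transact-async conn (first batches)))
             (rest batches))

      ;; window full (or input exhausted): block on the oldest future
      (seq in-flight)
      (do @(peek in-flight)
          (recur (pop in-flight) batches))

      :else :done)))
```

When the loop returns, every submitted transaction has been dereffed, which also answers the "how do I know the import completed" question.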
#2020-04-2207:41onetomI'm also not deref-ing the d/transact calls.
I saw on the https://docs.datomic.com/on-prem/capacity.html#data-imports page,
that I should use the async API and do some pipelining, but not sure how.
Is there any example of such pipelining somewhere?
Am I hitting some limitation of the H2 store somehow?#2020-04-2207:41onetomi checked one import example:
https://github.com/Datomic/codeq/blob/master/src/datomic/codeq/core.clj#L466
but this doesn't use the async api, it's just not dereffing the d/transact call...#2020-04-2207:49onetomi'm trying with d/transact-async now and the utilization is slightly better, but then I'm not sure how to determine when the import has completed.#2020-04-2209:37favilaYou get max utilization with pipelining plus back pressure. You achieve pipelining by using transact-async, leaving a bounded number in-flight (not dereffed) and backpressure by dereffing in order of submissions.#2020-04-2209:39favilahttps://docs.datomic.com/cloud/best.html#pipeline-transactions explains and links to examples#2020-04-2209:41favilaBe warned that the impl they show there assumes no interdependence between transactions (core.async pipeline-blocking executes its parallel work in no particular order, but results are in the same order as input)#2020-04-2210:27onetomah, i see! the on-prem docs also have that page:
https://docs.datomic.com/on-prem/best-practices.html#pipeline-transactions
thanks, @favila!#2020-04-2213:12ghadithe examples there don't retry either#2020-04-2213:43Joe LaneLook here for a project to study which includes retry and backpressure. https://github.com/Datomic/mbrainz-importer#2020-04-2214:18defaI’m having a problem creating a database when running the datomic transactor in a docker container. I created the docker container as described at https://hub.docker.com/r/pointslope/datomic-pro-starter/. Since I’d like to also run a peer server and a datomic-console dockerized, I configured the transactor with storage-access=remote and set storage-datomic-password=a-secret. The docker container exposes ports 4334-4336.
When connecting from the host via repl to the transactor (docker) I get an error:
Clojure 1.10.1-pro-0.9.6045 defa$ ./bin/repl-jline
user=> (require '[datomic.api :as d])
nil
user=> (d/create-database "datomic:")
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
What does this error mean? With the wrong password I get:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/read-transactor-location-failed Could not read transactor location from storage
#2020-04-2214:33favilathe datomic transactor’s properties file needs a host= or alt-host= that has a name that other docker containers can resolve to the storage container#2020-04-2214:34favila(in the dev storage case, the storage and transactor happen to be the same process, but that this is the general principle)#2020-04-2214:34favilaso connecting to “localhost” connects to the peer container localhost, which is not correct#2020-04-2214:36faviladatomic connection works like: 1) transactor writes its hostname into storage 2) d/connect on a peer connects to storage, retrieves transactor hostname 3) peer connects to transactor hostname#2020-04-2214:36favilayou appear to be failing at step 3 in your first error, step 2 in your second error#2020-04-2214:47defa@favila not sure if I understand correctly… I changed host=localhost to host=datomic-transactor and log now says:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:<DB-NAME>, storing data in: data ...
System started datomic:<DB-NAME>, storing data in: data
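The host-resolution fix being worked out in this exchange (and favila's host=0.0.0.0 / alt-host suggestion later in the thread) amounts to transactor properties along these lines; hostnames and password here are illustrative:

```properties
## dev-transactor.properties (sketch; values are illustrative)
protocol=dev
## bind to all interfaces inside the container
host=0.0.0.0
## advertise a name that both peers and other containers can resolve
alt-host=datomic-transactor
port=4334
storage-access=remote
storage-datomic-password=a-secret
```

The transactor binds on `host=` but writes both names into storage, so peers that can resolve `alt-host` will connect via it.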
#2020-04-2214:48favilafrom where your peer is, can you resolve “datomic-transactor” ?#2020-04-2214:49defaSince I’m connecting from the docker host, I altered /etc/hosts to map datomic-transactor to 127.0.0.1 (localhost) … same problem when connecting to `
datomic:
…#2020-04-2214:50defaI will try from my docker peer server but thought that I had to create a database first (before launching the peer)#2020-04-2214:50favilatry nc -zv datomic-transactor 4334 from a terminal running in the same context as your peer#2020-04-2214:52defa$ nc -zv datomic-transactor 4334
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src 127.0.0.1 port 52204
dst 127.0.0.1 port 4334
rank info not available
TCP aux info available
Connection to datomic-transactor port 4334 [tcp/*] succeeded!#2020-04-2214:55defaJust to see if I understand peer-servers correctly… can I start a peer-server without (d/create-database <URI>) first? Because I get:
Execution error at datomic.peer/get-connection$fn (peer.clj:661).
Could not find my-db in catalog
Full report at:
/tmp/clojure-3528411252793798518.edn
where my-db has not been created before.#2020-04-2214:56favilano, you need the db first#2020-04-2214:57favilanow try nc -zv datomic-transactor 4335#2020-04-2214:57favila(4335 is storage)#2020-04-2214:58defa$ nc -zv datomic-transactor 4335
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src 127.0.0.1 port 52844
dst 127.0.0.1 port 4335
rank info not available
TCP aux info available
Connection to datomic-transactor port 4335 [tcp/*] succeeded!#2020-04-2214:58favilaif both of these work, your bin/repl-jline should succeed if you run it from the same terminal#2020-04-2214:58favila(specifically the create-database you were trying before)#2020-04-2215:02defaNow it does… just wondering why it didn’t before. But I tried with a new repl…#2020-04-2215:03defaeven works with localhost in the uri…#2020-04-2215:03favilawhat was in your host= before?#2020-04-2215:04defahost=localhost…#2020-04-2215:05favilaso that means the transactor bound to the docker container’s localhost, 127.0.0.1; probably not the same as the peer’s?#2020-04-2215:06favila(i’m fuzzy on docker networking)#2020-04-2215:06defaNot sure but it does work now. Thank you very much @favila for your quick response and fruitful help!#2020-04-2215:07defaI’m fairly new to docker and datomic but your explanations do make perfect sense!#2020-04-2215:07favilaI usually see and use host=0.0.0.0 alt-host=something-resolveable so I don’t have to worry about how the host= resolves on both transactor and peer#2020-04-2215:08defaOkay, will try this as well. Thanks again!#2020-04-2215:08favilathe transactor will use host= for binding, and advertise both for connecting#2020-04-2215:08favilaand the peers will end up using alt-host#2020-04-2216:57kennyI'm trying to query out datom changes between a start and end date under a cardinality many attribute by doing this:
'[:find ?date ?tx ?w ?attr ?v ?op
:keys date tx db/id attr v op
:in $ ?container ?start ?stop
:where
[?container :my-ref-many ?w]
[?w ?a ?v ?tx ?op]
[?a :db/ident ?attr]
[?tx :db/txInstant ?date]
[(.before ^Date ?date ?stop)]
[(.after ^Date ?date ?start)]]
The query always times out. I assume it must be doing something very inefficient (e.g., full db scan). Is there a more efficient way to get this sort of data out?#2020-04-2217:21marshalluse a since-db#2020-04-2217:21marshallhttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/filters.repl#L102#2020-04-2217:22marshallyou could use a since and an as-of db to get an ‘in-between’#2020-04-2217:23kennyOooo, ok! I'll try that.#2020-04-2217:38kennyI'm struggling figuring out how I'm supposed to join across these dbs. I'm trying:
'[:find #_?date ?tx ?w ?a ?v ?op
:keys #_date tx db/id attr v op
:in $as-of $since ?workspaces-group ?start ?stop
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$as-of ?w _ ]
[$since ?w _]
[?w ?a ?v ?op ?tx]
#_[?tx :db/txInstant ?date]
#_[?a :db/ident ?attr]]
and get
Nil or missing data source. Did you forget to pass a database argument?
Is there an example of this somewhere?#2020-04-2217:43marshallyou need to pass the “regular” db as well as the others (i believe#2020-04-2217:44marshallso :in $ $asof $since#2020-04-2217:44favilaI think it’s just that this clause doesn’t specify a db#2020-04-2217:44favila [?w ?a ?v ?op ?tx]#2020-04-2217:44marshalloh, right#2020-04-2217:44marshallyes that’s definitely why#2020-04-2217:45marshallthx @favila#2020-04-2217:45kennyBut what is supposed to go there?#2020-04-2217:45marshallwhich db value do you want that information from#2020-04-2217:45kennyI think both?#2020-04-2217:46marshallthen you’d need 2 clauses#2020-04-2217:46marshallone for each db#2020-04-2217:46marshalland you’ll only get datoms that are the same in both#2020-04-2217:48kennyThis?
'[:find ?tx ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]
[$as-of ?w ?a ?v ?tx ?op]]
#2020-04-2217:49kennyDoesn't that only return datoms where ?a ?v ?tx ?op in both since and as-of are the same?#2020-04-2217:56kennyI'm pretty sure this is what I want:
'[:find ?tx ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$since ?w ?a ?v ?tx ?op]
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]]
But I get
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
processing clause: (?w ?a ?v ?tx ?op), message: java.lang.ArrayIndexOutOfBoundsException
#2020-04-2217:57kennyNot really sure what that exception means. Here's a larger stacktrace:
clojure.lang.ExceptionInfo: processing clause: (?w ?a ?v ?tx ?op), message: java.lang.ArrayIndexOutOfBoundsException {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "processing clause: (?w ?a ?v ?tx ?op), message: java.lang.ArrayIndexOutOfBoundsException", :dbs [{:database-id "f3253b1f-f5d1-4abd-8c8e-91f50033f6d9", :t 105925, :next-t 105926, :history false}]}
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$unchunk.invokeStatic(sync.clj:47)
at datomic.client.api.sync$unchunk.invoke(sync.clj:45)
at datomic.client.api.sync$eval50206$fn__50227.invoke(sync.clj:101)
at datomic.client.api.impl$fn__11664$G__11659__11671.invoke(impl.clj:33)
at datomic.client.api$q.invokeStatic(api.clj:350)
at datomic.client.api$q.invoke(api.clj:321)
at datomic.client.api$q.invokeStatic(api.clj:353)
at datomic.client.api$q.doInvoke(api.clj:321)#2020-04-2218:00kennyGot it. See the duplicate :find here:
'[:find ?tx ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$since ?w ?a ?v ?tx ?op]
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]]
That's a nasty error message though 🙂#2020-04-2218:01favilawhat is the set of ?w you are interested in?#2020-04-2218:02favilathose currently monitored only? or ones that were ever monitored?#2020-04-2218:02kennyI want all ?w added or retracted between 2 dates that were on the :aws-workspaces-group/monitored-workspaces card many ref attr.#2020-04-2218:03favilathe confusion here is that there are two different entity histories to consider#2020-04-2218:03kennyThis query gives me some results
[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]]
It appears to be missing retractions.#2020-04-2218:04favilaare either of those history dbs?#2020-04-2218:04kennyNo. Called like this:
(d/q
'[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?w ?a ?v ?tx ?op]]
(d/as-of db stop-date)
(d/since db start-date)
[:application-spec/id workspaces-group-id])#2020-04-2218:05favilaso this gives you ?w that were monitored at the moment of stop-date, then looks for datoms on those ?w entities since start-date (if you make that $since a history-db)#2020-04-2218:06favilain particular, if there’s a ?w that used to be monitored between start and stop, you won’t see it#2020-04-2218:06favilais that what you want?#2020-04-2218:07kennyNo. I want ?w that used to be monitored between start and stop included as well.#2020-04-2218:07favilayou want ones that started to be monitored after start, or those that were monitored at start or any time between start and stop?#2020-04-2218:08kennyCorrect#2020-04-2218:08favila…so both?#2020-04-2218:08kennyYes#2020-04-2218:13favilaThen I think you need something like this:#2020-04-2218:13favila(d/q '[:find ?w ?a ?v ?tx ?op
:in $as-of $since ?workspaces-group
:where
[$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
[$since ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w _ true]
[$since ?w ?a ?v ?tx ?op]]
(d/as-of db start)
(-> db (d/history) (d/as-of end) (d/since start))
workspaces-group)
#2020-04-2218:13favilaas-of-start gets you whatever workspaces were already being monitored at start moment#2020-04-2218:14favilathen you look for groups again in $since for any that began to be monitored between start and end#2020-04-2218:14favila?w is now the set-union of both#2020-04-2218:15favilathen you look for any datoms added to ?w between start (not-inclusive) and end (inclusive)#2020-04-2218:16favilait’s possible you want to include ?start there too, in which case you need to decrement start-t of $since by one#2020-04-2218:16kennyWon't [$since ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w _ true] not work if ?workspaces-group was not transacted within start and stop?#2020-04-2218:16faviladoh you are right, this is unification not union#2020-04-2218:17faviladoing this efficiently might need two queries#2020-04-2218:17favilayou can’t use two different data sources in an or#2020-04-2218:20kennyWhy would this not work?
(d/q
  '[:find ?w ?a ?v ?tx ?op
    :in $as-of $since ?workspaces-group
    :where
    [$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w]
    [$since ?w ?a ?v ?tx ?op]]
  (d/as-of db stop-date)
  (-> (d/history db) (d/as-of stop-date) (d/since start-date))
[:application-spec/id workspaces-group-id])#2020-04-2218:21favilaIt would miss ?w that were removed from workspaces-group between start and stop#2020-04-2218:21favilait’s the first choice I offered you earlier#2020-04-2218:21kennyOh, right. That query also hangs for 10+ seconds. Didn't let it finish.#2020-04-2218:21favilathis is only ?w that were part of the group at the very moment of end-date#2020-04-2218:23favilausing $since instead would miss ?w that were in the group at the moment of start-date#2020-04-2218:23kennySo perhaps query for all ?w at start-date and any added up to end-date. Pass that to a second query that uses (-> (d/history db) (d/as-of stop-date) (d/since start-date)) to get all datoms#2020-04-2218:25favilayes, so 3 queries#2020-04-2218:25kennyThe first one needs to be 2 queries, huh?#2020-04-2218:26favilamaybe you can unify later, let me think#2020-04-2218:27favila(d/q '[:find ?w ?a ?v ?tx ?op
       :in $as-of $since ?workspaces-group
       :where
       [$as-of ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w-at]
       [$since ?workspaces-group :aws-workspaces-group/monitored-workspaces ?w-since _ true]
       (or-join [?w-at ?w-since ?w]
         [(identity ?w-at) ?w]
         [(identity ?w-since) ?w])
       [$since ?w ?a ?v ?tx ?op]]
     (d/as-of db start)
     (-> db (d/history) (d/as-of end) (d/since start))
     workspaces-group)
?#2020-04-2218:29kennySame problem as the other query, I think. ?workspaces-group isn't in $since#2020-04-2218:30favilayeah, and if I solved that, there would be the same problem with as-of#2020-04-2218:30kennyRight#2020-04-2218:30favilaugh, maybe a sentinel value, like -1#2020-04-2218:33favilaif you know that set will be small across all time, you could filter by ?tx like you were doing before#2020-04-2218:34kennyUsing an unfiltered history db?#2020-04-2218:34favilayeah#2020-04-2218:34favilajust to get ?w#2020-04-2218:34favilayou still want the $since to get changes to ?w entities themselves#2020-04-2218:35kennyWhat would you consider small? < 1,000,000?#2020-04-2218:36favilaI would not consider that small…#2020-04-2218:36favilabut really “small” is just “this query is fast enough”#2020-04-2218:36favilaI wonder if we can step back#2020-04-2218:36favila[:application-spec/id workspaces-group-id]
#2020-04-2218:36kennyI think it will often be in the 10-50 thousand range.#2020-04-2218:36favilais that an immutable identifier?#2020-04-2218:37kennyYes#2020-04-2218:37kennyi.e., a lookup ref?#2020-04-2218:37favilaso once asserted on an entity, it is never retracted and never asserted on a different entity#2020-04-2218:37kennyRight#2020-04-2218:38kenny> it is never retracted
Unless the entity itself is retracted#2020-04-2218:42kennyWith 3 queries I'd do:
1. Query for all ?w that are monitored in as-of.
2. Query for all ?w added to monitored in since.
3. Pass the union of ?w in 1 and 2 to a history db and get all the datoms#2020-04-2218:43favilacorrect; the db in 3 is either the same as 2 or just with a since adjusted 1 tx backward#2020-04-2218:43favila(depending on what you want)#2020-04-2218:46kennyShould 2 be querying a (-> db (d/as-of stop-date) (d/since start-date))?#2020-04-2218:47favilayes. the since sets an exclusive outer range#2020-04-2218:47favilatxs that occur exactly at start-date are excluded#2020-04-2218:48kennyIf none are added then that query will throw. Guess I just catch that and return an empty set.#2020-04-2218:49kenny> the db in 3 is either the same as 2 or just with a since adjusted 1 tx backward
Oh, right it would be the same. Since now we know all the ?w it's easy to search for the matching datoms.#2020-04-2218:49favilathat will omit changes to ?w that occurred exactly at start-date.#2020-04-2218:50favilathis difference should only matter if you ever change ?w and group membership in the same tx#2020-04-2218:54kennyHaha, right. Adjusting 1 tx back is easy though#2020-04-2219:44kennyWait the db for 2 needs to include retracts. If a workspace was retracted between start and end, it would not be included in query 3.#2020-04-2219:45kennyI think that just means changing the passed in db to be (-> (d/history db) (d/as-of stop-date) (d/since start-date))#2020-04-2219:47kennyI also don't think the lookup ref for :application-spec/id will be present in that db so I'll need to have the db/id for ?workspace-group#2020-04-2219:47favilayes, sorry I misread your earlier db constructor. it needs d/history#2020-04-2219:47favilayou can look up the id in the query#2020-04-2219:48kennyIn query 2?#2020-04-2219:48favilaboth?#2020-04-2219:49kennyI could do it in query 1. Since query 2 is filtered by as-of and since, I don't think the :application-spec/id attribute will be included since it would have been transacted before the since filter.#2020-04-2219:49kennyUnless there is some special condition for lookup refs#2020-04-2219:50favilaunsure how lookup refs are resolved with history or filtered dbs#2020-04-2219:50kennyi.e., this query would never return any results given :application-spec/id was transacted before start-date
(d/q '[:find ?w
       :in $ ?workspaces-group-id
       :where
       [?workspace-group :application-spec/id ?workspaces-group-id]
       [?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
     (-> (d/history db) (d/as-of stop-date) (d/since start-date))
     workspaces-group-id)
#2020-04-2219:51kennyAnd this throws:
(d/q '[:find ?w
       :in $ ?workspace-group
       :where
       [?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
     (-> (d/history db) (d/as-of stop-date) (d/since start-date))
[:application-spec/id workspaces-group-id])#2020-04-2219:51kennySo I think that means I need the ?workspace-group db/id before I do query 2.#2020-04-2219:52favilabut it may not exist at that time, right?#2020-04-2219:52kennyWhich time?#2020-04-2219:52favilaas-of. the time for query 1#2020-04-2219:53favilaa group can be created and destroyed in between start and end time#2020-04-2219:53kennyAh. If ?workspace-group doesn't exist at time 1, we would never need to run this query#2020-04-2219:56kennyLanded here:
(defn get-workspaces-over-time2
  [db workspaces-group-id start-date stop-date]
  (let [group-db-id (:db/id (d/pull db [:db/id] [:application-spec/id workspaces-group-id]))
        cur-ws (->> (d/q '[:find ?w
                           :in $ ?workspace-group
                           :where
                           [?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
                         (d/as-of db start-date) [:application-spec/id workspaces-group-id])
                    (map first))
        added-ws (->> (d/q '[:find ?w
                             :in $ ?workspace-group
                             :where
                             [?workspace-group :aws-workspaces-group/monitored-workspaces ?w]]
                           (-> (d/history db) (d/as-of stop-date) (d/since start-date))
                           group-db-id)
                      (map first))
        all-ws (set (concat cur-ws added-ws))
        datoms (d/q '[:find ?w ?a ?v ?tx ?op
                      :in $ [?w ...]
                      :where
                      [?w ?a ?v ?tx ?op]]
                    (d/history db) all-ws)]
    datoms))
But I'm back to where I started 😞
processing clause: [?w ?a ?v ?tx ?op], message: java.util.concurrent.TimeoutException: Query canceled: timeout elapsed
#2020-04-2219:57favilaso you have a set of ?w at this point?#2020-04-2219:57kennyRight#2020-04-2219:57favilahow large is it?#2020-04-2219:57kenny874#2020-04-2219:58favilayour history db is unfiltered?#2020-04-2219:58kennyYes#2020-04-2219:59kennyUsing (-> (d/history db) (d/as-of stop-date) (d/since start-date)) hangs "forever". I've been letting it run since I sent the 874 message#2020-04-2219:59kennyIt also caused the datomic solo instance to spike to 2000% cpu 🙂#2020-04-2220:00favilawell, last resort you can use d/datoms :eavt for each ?w#2020-04-2220:00favilawith your filtered db#2020-04-2220:01kennyYeah... That results in ?w number of DB queries, right?#2020-04-2220:01favilayou can run them in parallel, but yes#2020-04-2220:02favilathey are lazily tailed though#2020-04-2220:02favilaqueries are eager, datom-seeking is lazy#2020-04-2220:02favilait could be the problem is result-set size#2020-04-2220:04favila(mapcat #(d/datoms filtered-history-db :eavt %) (sort all-ws))#2020-04-2220:04kennyHmm, ok. That is a potential solution. Thank you for working with me on this. It's been incredibly insightful.
Any idea why that last query is so expensive?#2020-04-2220:06kennyWhy'd you sort all-ws?#2020-04-2220:07favilait probably won’t make a difference, but it increases the chance the next segment (in between datom calls) is already loaded#2020-04-2220:08favila(the entire index is sorted, so fetching 1 2 3 4 5 is better than 5 2 1 4 3)#2020-04-2220:09favila> Any idea why that last query is so expensive?#2020-04-2220:09favilamy suspicion is the result set size is large#2020-04-2220:10kennyInteresting. A bit surprised by that. Would really like to know what's in there that would cause it to be so big 🙂 In this case it shouldn't be that big.#2020-04-2220:11favilawell if your instance ever calms down that mapcat will tell you for sure#2020-04-2220:12favilaI’m not saying it will be fast, but it will use almost no memory#2020-04-2220:12favila(just make sure you don’t hold the head on your client…)#2020-04-2220:19kennyDoing a count on it... Also hung. Must be huge.#2020-04-2220:20kenny748650#2020-04-2220:21kennyOh wow, there is definitely an attribute in there that gets updated all the time that is useless here.#2020-04-2220:22kennyThat one should probably even be :db/noHistory#2020-04-2220:29kennyI wonder if restricting the query to the attrs I'm interested in would increase the perf.#2020-04-2220:29kennyAfter filtering out those high-churn attrs, I get a coll of 576 datoms#2020-04-2220:31kennyWould need to pull the db-ids of all the attrs to filter since those are also transacted outside the between-db.#2020-04-2220:46favilawith a whitelist (or even blacklist) of attrs, you may be able to retry your query#2020-04-2220:47favilai.e. not use datoms#2020-04-2220:51kennyWeird error doing that:
processing clause: {:argvars nil, :fn #object[datomic.core.datalog$expr_clause$fn__23535 0x11f3ef5d "#2020-04-2217:02Cas ShunI would like to find entities with a (card-many) attribute with more than one value. A theoretical example is finding customers with more than n orders. What's the best way to go about this? Note - using cloud#2020-04-2217:14favila[?e ?card-many-a ?v] [?e ?card-many-a ?v2] [(!= ?v ?v2)]#2020-04-2217:28Cas ShunI just get [] when trying this, so maybe I'm misunderstanding something.
I just tried with the mbrainz database (to use a public dataset) to do something like find tracks with multiple artists (:track/artists is a card-many ref).
(d/q '[:find ?e
       :where
       [?e :track/artists ?a]
       [?e :track/artists ?a2]
       [(!= ?a ?a2)]]
     db)
I'm new to Datomic and trying to learn, so I believe I am missing some knowledge here maybe?#2020-04-2217:47favilaare you sure db is what you think it is? are you sure any track actually has multiple artists?#2020-04-2217:48favilaHere’s a minimal example:
(d/q '[:find ?e
       :where
       [?e :artist ?v]
       [?e :artist ?v2]
       [(!= ?v ?v2)]]
     [[1 :artist "foo"]
      [2 :artist "bar"]
      [2 :artist "baz"]])
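The original question asked for entities with more than n values, not just more than one. A count aggregate plus a post-filter is one way to generalize the self-join above; this is a sketch against the same throwaway in-memory relation (Datomic's query language has no having-style clause, so the threshold check happens in ordinary Clojure):

```clojure
;; Sketch: entities whose :artist attribute has more than n values.
;; Uses the same in-memory relation as the example above.
(let [n 1]
  (->> (d/q '[:find ?e (count ?v)
              :where [?e :artist ?v]]
            [[1 :artist "foo"]
             [2 :artist "bar"]
             [2 :artist "baz"]])
       (filter (fn [[_e cnt]] (> cnt n)))  ; threshold applied outside the query
       (map first)))
;; => (2)
```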
#2020-04-2218:00Cas ShunI'm sure there are multiple artists on some tracks, and I know of a few tracks specifically.#2020-04-2218:00Cas Shunthe official cloud docs even have an example showing multiple artists on a track#2020-04-2218:02Cas Shunhowever, your example returns []#2020-04-2218:28favila?#2020-04-2218:28favila(d/q ’[:find ?e
:where
[?e :artist ?v]
[?e :artist ?v2]
[(!= ?v ?v2)]]
[[1 :artist “foo”]
[2 :artist “bar”]
[2 :artist “baz”]])
=> #{[2]}#2020-04-2218:28favila(from my repl)#2020-04-2314:53Cas ShunThis query doesn't work for me at all. Is this an on-prem thing?#2020-04-2314:57favilaI don’t think so? what happens?#2020-04-2314:59favilaoh, I bet it needs some kind of db somewhere in the data sources to know where to send the query#2020-04-2314:59favilahm, not sure how I feel about that#2020-04-2314:59favilatry this:#2020-04-2315:00favila(d/q ’[:find ?e
:in $ $db :where
[?e :artist ?v]
[?e :artist ?v2]
[(!= ?v ?v2)]]
[[1 :artist “foo”]
[2 :artist “bar”]
[2 :artist “baz”]] some-db)#2020-04-2315:00favilait shouldn’t matter what db you provide since it’s not read#2020-04-2315:01favilaI was just trying to demonstrate in a low-effort, db-agnostic way that the self-join should work#2020-04-2316:00Cas ShunUnable to resolve symbol: "foo" in this context
#2020-04-2316:15favilathat sounds like copy-paste error?#2020-04-2217:04ghadi@kenny use the datoms API#2020-04-2217:05ghadihttps://docs.datomic.com/client-api/datomic.client.api.html#var-datoms#2020-04-2217:15favilaAm I right that datomic cloud query doesn’t let you look at the log? (tx-ids, tx-data)#2020-04-2217:20marshalllog-in-query is not in the client API
You can use tx-range, however:
https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/log.clj#2020-04-2217:19kennyHmm. So this would require some sort of iterative approach? I'd need to query for the tx id for my start and end dates and then filter the :aevt index for datoms within the tx id range. Using that result, for all entity ids returned, I'd filter the :eavt for tx ids between my start and end dates. I would then resolve all attribute ids, giving me my list. Is this what you were thinking @ghadi?#2020-04-2222:26joshkhis the cloud async client library useful for optimising some function which combines the results of more than one parallel query?#2020-04-2300:09Sam DeSotaI don't see it documented, but it appears that you can only use d/tx-range to map over 1000 datoms at a time? Is this correct? Requesting on a datomic cloud database with about 4 million datoms, where 13194139533312 is my first txid:
(count (into [] (d/tx-range (conn) {:start 13194139533312 :end nil}))) ;; => 1000
#2020-04-2300:14Sam DeSotaAlso, this behavior applies to t values:
(count (into [] (d/tx-range (conn) {:start 0 :end 4000}))) ;; => 1000#2020-04-2300:27Sam DeSotaI fixed via
(defn infinite-tx-range [conn {:keys [start end]}]
  (let [current-end (+ start 1000)]
    (if (and (some? end) (>= current-end end))
      (d/tx-range conn {:start start :end end})
      (lazy-cat (d/tx-range conn {:start start :end current-end})
                (infinite-tx-range conn {:start (+ start 1000) :end end})))))#2020-04-2300:28Sam DeSotaDefinitely feels like a bug.#2020-04-2300:32Joe LaneLook at the namespace docstring https://docs.datomic.com/client-api/datomic.client.api.html you need to specify :limit -1 along with :start and :end. Example:
(count (into [] (d/tx-range (conn) {:start 0 :end 4000 :limit -1}))) ;; => 4000#2020-04-2300:34Sam DeSotaAh, got it. Thank you very much.#2020-04-2314:15Sam DeSotaI noticed that my datomic tx count was growing faster than I expected; after inspecting the tx log, there appear to be random no-op transactions a few times per second:
[#datom[13194144633312 50 #inst "2020-04-22T23:19:36.303-00:00" 13194144633312 true]]
[#datom[13194144633313 50 #inst "2020-04-22T23:19:36.549-00:00" 13194144633313 true]]
[#datom[13194144633314 50 #inst "2020-04-22T23:19:36.771-00:00" 13194144633314 true]]
[#datom[13194144633315 50 #inst "2020-04-22T23:19:37.336-00:00" 13194144633315 true]]
[#datom[13194144633316 50 #inst "2020-04-22T23:19:38.186-00:00" 13194144633316 true]]
[#datom[13194144633317 50 #inst "2020-04-22T23:19:38.919-00:00" 13194144633317 true]]
[#datom[13194144633318 50 #inst "2020-04-22T23:19:39.696-00:00" 13194144633318 true]]
[#datom[13194144633319 50 #inst "2020-04-22T23:19:40.024-00:00" 13194144633319 true]]
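A frequent source of tx-instant-only entries like these is calling d/transact with empty tx-data. A minimal guard, as a sketch using the Cloud client API (transact-when-changed is a hypothetical helper name, not anything from the thread):

```clojure
;; Sketch: skip the transact call entirely when there is nothing to assert,
;; since every successful d/transact writes at least :db/txInstant.
(defn transact-when-changed
  [conn tx-data]
  (when (seq tx-data)
    (d/transact conn {:tx-data tx-data})))
```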
#2020-04-2314:16Sam DeSotaJust double checking, is this normal behavior?#2020-04-2314:16favilaIt’s normal behavior…for an application that is transacting a few times per second 🙂#2020-04-2314:17Sam DeSotaRight, but these txs have no datoms besides txInstant, and I'm probably not transacting that often. So probably a bug on my end?#2020-04-2314:17favilayes#2020-04-2314:18Sam DeSotaGot it, thank you#2020-04-2314:18favilayou should check before you submit the tx whether your tx-data is empty#2020-04-2314:18faviladatomic won’t drop a transaction--every time you call d/transact it will transact at least tx-instant, or fail#2020-04-2314:19favilait’s also possible to submit a non-empty tx that ends up not changing anything. that would also look like an empty tx#2020-04-2314:19favilae.g. if you reassert a datom that is already there#2020-04-2314:20Sam DeSotaAh interesting, that’s helpful#2020-04-2315:45Sam DeSotaWorking on adding some monitoring for the issue above ^ but all the (cast/dev) calls break with an error like this; was only able to find one older slack message in the archive, but there was no resolution for the issue. Any hints?
> (cast/dev {:msg "Test"})
No implementation of method: :-dev of protocol: #'datomic.ion.cast.impl/Cast found for class: nil#2020-04-2315:46marshallcast/dev does not cast in production
https://docs.datomic.com/cloud/ions/ions-monitoring.html#dev
You’d need to redirect cast/dev before calling it#2020-04-2315:47marshallhttps://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow#2020-04-2315:47marshallif you’re running in an active ion, use cast/event instead#2020-04-2315:51Sam DeSotaGot it. Will cast/dev break in a REPL? Both cast/dev + cast/event break with a similar error locally in a REPL. Want to make sure it won't break my production ion.
> (cast/event {:msg "CodeDeployEvent"})
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil#2020-04-2315:53marshallsomething else is going on there
do you have datomic.ion.cast in your require and also check versions you’re using#2020-04-2315:58Sam DeSotaThis is my setup, checking out latest versions
;; versions
com.datomic/ion {:mvn/version "0.9.35"}
com.datomic/client-cloud {:mvn/version "0.8.81"}
;; ns
(ns my.util
  (:require [datomic.client.api :as d]
            [datomic.ion :as ion]
            [datomic.ion.cast :as cast]))
(defn transact [& args]
  (cast/event {:msg "CodeDeployEvent"})
  (apply d/transact args))#2020-04-2316:02Sam DeSotaThese appear to be the latest versions in ion starter#2020-04-2316:05Sam DeSotaWeird, isolated deps and loaded just this namespace and still having the same problem#2020-04-2316:15Sam DeSotaOkay, so I guess cast/event just doesn't work locally at all? I just cloned ion starter and same error.#2020-04-2316:15Sam DeSotaI guess I have to throw up a test endpoint to see if it works in ions#2020-04-2316:39Sam DeSotaIn case anyone else runs into this, cast/event does not appear to work locally, though perhaps that can be fixed with https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow. When deploying to ions, it is able to report to CloudWatch correctly.#2020-04-2316:42Sam DeSotaYes, calling (cast/initialize-redirect :stdout) fixes the issue locally.#2020-04-2316:59onetomWhy does the grouping happen differently in my :2 & :3 examples?
(let [names [[1 "Jane"] [2 "JaNe"] [3 "JANE"]
             [4 "paul"] [5 "Paul"]
             [6 "EVE"]
             [7 "bo"]]
      q #(->> (d/q % names)
              (sort-by (comp count second)))]
  (pp/pprint
    {:1
     (->> names
          (group-by (comp str/upper-case second))
          (vals)
          (map set)
          (sort-by count)
          #_(filter (comp pos? dec count)))
     :2
     (q '[:find ?upcase-name (distinct ?id+name)
          :in [?id+name ...]
          :where
          [(untuple ?id+name) [?id ?name]]
          [(clojure.string/upper-case ?name) ?upcase-name]])
     :3
     (q '[:find (distinct ?id+name)
          :with ?upcase-name
          :in [?id+name ...]
          :where
          [(untuple ?id+name) [?id ?name]]
          [(clojure.string/upper-case ?name) ?upcase-name]])}))
output is:
{:1
(#{[6 "EVE"]}
#{[7 "bo"]}
#{[5 "Paul"] [4 "paul"]}
#{[1 "Jane"] [2 "JaNe"] [3 "JANE"]}),
:2
(["BO" #{[7 "bo"]}]
["EVE" #{[6 "EVE"]}]
["PAUL" #{[5 "Paul"] [4 "paul"]}]
["JANE" #{[1 "Jane"] [2 "JaNe"] [3 "JANE"]}]),
:3
([#{[6 "EVE"] [1 "Jane"] [5 "Paul"] [2 "JaNe"] [3 "JANE"] [4 "paul"]
[7 "bo"]}])}
i would expect
(q '[:find (distinct ?id+name) :with ?upcase-name ...
and
(q '[:find (distinct ?id+name) ?upcase-name ...
to form groups the same way#2020-04-2317:04favilaI encountered this recently too. feels like a bug?#2020-04-2317:10onetomi've been pondering this for more than an hour.
i've read the related docs in https://docs.datomic.com/on-prem/query.html#with a few times, but i don't see any mistakes i'm making, so yes, it feels like a bug to me too.
where and how can i report it?#2020-04-2317:16favilahopefully it gets visibility here, but opening a support ticket is a guaranteed way to get attention#2020-04-2317:16favilahttps://support.cognitect.com/hc/en-us/requests/new#2020-04-2317:25onetomthanks!#2020-04-2317:26onetomi've also seen situations where using the set function as an aggregate behaved differently than using distinct.
it feels like a related issue maybe.
have you seen anything like that?
should they not be the same from a functional perspective?#2020-04-2317:33onetomhere is a more minimal example for others who might also want to play with it:
(let [names ["a" "A" "b"]]
  [(-> '[:find (distinct ?name) ?upcase-name :in [?name ...]
         :where [(clojure.string/upper-case ?name) ?upcase-name]]
       (d/q names))
   (-> '[:find (distinct ?name) :with ?upcase-name :in [?name ...]
         :where [(clojure.string/upper-case ?name) ?upcase-name]]
       (d/q names))])
=> [[[#{"a" "A"} "A"] [#{"b"} "B"]] [[#{"a" "b" "A"}]]]#2020-04-2317:41onetomSubmitted the issue as https://support.cognitect.com/hc/en-us/requests/2668#2020-04-2322:02donyormSo I'm trying to automate deployments with Amazon Codebuild, and having a working deploy script (it runs fine on my local machine). However, when it runs on the codebuild server I get the following error: Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.35 in central () . I can download the exact zip used by codebuild and run the script fine on my local machine. Why would clojure-cli not know to look for the ion jar in datomic's repo?#2020-04-2322:06Alex Miller (Clojure team)It probably is - the error just reports the last place it looked#2020-04-2322:07Alex Miller (Clojure team)I think this is actually maybe a known issue with code build though#2020-04-2322:07Alex Miller (Clojure team)Where code build can’t see stuff in a different region or different vpn or something#2020-04-2322:08donyormHuh any chance you know a workaround? 
I suppose this isn't strictly necessary, but it would be nice#2020-04-2322:09Alex Miller (Clojure team)They’ve run into this on the Datomic team iirc#2020-04-2322:09Alex Miller (Clojure team)I’m not remembering the details#2020-04-2322:10Alex Miller (Clojure team)Don’t think they’re available rn#2020-04-2322:10donyormI think I found the issue (https://stackoverflow.com/questions/48984763/aws-codebuild-cant-access-maven-repository-on-github), thanks for the hint that it was codebuild's fault#2020-04-2322:19marshallhttps://forum.datomic.com/t/ions-push-deployments-automation-issues/715/5#2020-04-2322:46donyorm@U05120CBV unfortunately, I'm running this codebuild in us-east-1, so I guess it's a different issue?#2020-04-2405:01tatutI just had this issue and ended up packaging my own ~/.m2 repo (with just com/datomic included) in a private s3 bucket, downloading and extracting that in the codebuild#2020-04-2405:02tatutit is really unfortunate workaround but I couldn't get access to the datomic releases, even in the same region#2020-04-2408:11stijnAre your permissions for Codebuild setup correctly? You need either Administrator access or add this to an IAM policy that is attached to the codebuild instance profile:
{
  "Sid": "DatomicReleasesAccess",
  "Effect": "Allow",
  "Action": "*",
  "Resource": [
    "arn:aws:s3:::datomic-releases-1fc2183a/*",
    "arn:aws:s3:::datomic-releases-1fc2183a"
  ]
}
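One way to apply the statement above, assuming the CodeBuild project uses a dedicated service role; the role name and file name below are placeholders, and the statement first has to be wrapped in a full policy document:

```shell
# Wrap the statement in a complete policy document, then attach it
# to the CodeBuild service role (names below are placeholders).
cat > datomic-releases-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatomicReleasesAccess",
      "Effect": "Allow",
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::datomic-releases-1fc2183a/*",
        "arn:aws:s3:::datomic-releases-1fc2183a"
      ]
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name my-codebuild-service-role \
  --policy-name DatomicReleasesAccess \
  --policy-document file://datomic-releases-policy.json
```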
#2020-04-2415:21donyorm@U0539NJF7 That's probably it. I'll look into it.#2020-04-2415:54donyormYep that seemed to do it. Thanks, stijn#2020-04-2405:03tatutI'm trying to access datomic via codebuild for db tests, but I can't create an endpoint for the vpc https://docs.datomic.com/cloud/operation/client-applications.html#create-endpoint (is LoadBalancerName not available in solo topology?)#2020-04-2412:42vlaaadLet's suppose i have an entity and a bunch of txs that touch that entity. What would be an efficient query to pull a bunch of data from this entity at these timepoints to see how it looked throughout its life? Or is (map #(d/pull (d/as-of db %) '[*] e) txs) the only way?
hard to tell without source code…#2020-04-2413:24marshallcloud or on-prem?#2020-04-2413:26vlaaadcloud#2020-04-2413:27vlaaadI tried with more entites, and using multiple d/pull + d/as-of IS a N+1 problem: it gets more and more slow, so I guess it performs multiple requests#2020-04-2413:15marshallhttps://github.com/cognitect-labs/day-of-datomic-cloud/blob/751618ff7526c956bd7d5558a2698eda369cee4f/tutorial/filters.repl#L108#2020-04-2413:16marshalldepending on what you’re looking for, you can also use tx-range: https://github.com/cognitect-labs/day-of-datomic-cloud/blob/751618ff7526c956bd7d5558a2698eda369cee4f/tutorial/log.clj#L50#2020-04-2416:42donyormIs there a reference anywhere to what permissions a user/role needs in order to push and deploy for ions?#2020-04-2417:34Joe Lane@U1C03090C https://docs.datomic.com/cloud/operation/access-control.html#org1c35561#2020-04-2417:36donyormThat seems to be more related to accessing the database itself, rather than just pushing ions. I don't need to give this role access to the database, it just needs to deploy ions. Does that still require being a datomic administrator?#2020-04-2416:51hadilsCloud question: when a new EC2 instance is started, is a new transactor created? Or is there one transactor for the whole system, regardless of the number of EC2 instances? I am curious if I need to track machine ids and so forth to figure out who is writing to parts of the database. I am probably overthinking this.#2020-04-2416:53marshall@hadilsabbagh18 you’re definitely overthinking it 🙂
there is no single transactor in Cloud
All nodes of the primary compute group can perform writes#2020-04-2416:53marshalltraffic is all routed through the load balancer#2020-04-2416:53marshalland uses consistent hashing and sticky sessions#2020-04-2416:54marshallto route requests from the same client and/or about specific DBs to particular nodes#2020-04-2416:54marshallbut that is strictly a performance optimization#2020-04-2416:54marshallany node in the group is capable of handling writes to any db#2020-04-2416:57hadilsThanks @marshall. When a new EC2 instance is started, doesn't my code start on it as well? Isn't there a potential for two servers to work on the same datoms in the database? My code is multi-threaded so there are processes that may replicate work on different EC2 instances if they are running. If that is the case, then I need to track who is doing what, right?#2020-04-2416:58marshallno. individual transactions are still serialized via coordination with storage#2020-04-2416:58marshallyou may need to consider that separate nodes may try to do things “at the same time”#2020-04-2416:58marshallbut that is no different than multithreaded db access in any system#2020-04-2416:59hadilsAh. Thanks! I can handle this case without machine ids, etc. Thanks a lot @marshall!#2020-04-2416:59marshalli.e. use compare-and-set, optimistic concurrency, etc#2020-04-2417:02hadils@marshall another question. I know that the lambda functions are actually proxies. Do they scale out and spawn separate processes within the EC2 pool when load becomes high?#2020-04-2516:28marciolHi all. I'm thinking of increasing our usage of Datomic, but I have some doubts about patterns of usage in a distributed microservices setting.
It's common to see Datomic in the wild as the source of truth and the final place where all our data should live.
There is a set of good practices related to the persistence layer in the microservices approach, and one of them is to use a database per bounded context to avoid coupling, but that doesn't seem to apply when using Datomic, given that Datomic allows distributed peers.
Can anyone shed more light on this subject? Blog posts and articles are very welcome.
https://theconsultingcto.com/posts/datomic-with-terraform/#2020-04-2720:52bhurlowFWIW I know that nubank deploys a datomic instance per microservice#2020-04-2800:56eraadI believe Datomic Cloud is optimized to work with one database. There is no need to shard or divide your application in multiple databases.
Per my understanding, microservices architectures with physically separated databases are needed because of technological constraints related to scalability.
With Datomic, you should not worry about that because it is already optimized for all kinds of data access patterns. Check these for further technical recommendations about those patterns: https://docs.datomic.com/cloud/best.html
Regarding domain bounded contexts, I believe these should be enforced at the code level. If you have different traffic patterns for your applications, you can use query groups, for example.
This style of architecture is a bit different from the “common knowledge” out there that couples domain modeling of bounded contexts with technology/scalability contraints of specific database technologies.
Anyways, I recommend you stick to one database and enforce your bounded contexts at the code level. If you need more, check out these planning strategies:
https://docs.datomic.com/cloud/operation/planning.html#2020-04-2814:37marciol@U0FHWANJK I have talked to a person that works there and he said that they deploy one datomic per bounded context#2020-04-2814:38bhurlow@U28A9C90Q yea I recall the same. I believe they have a "template" for starting a microservice which installs a datomic on-prem instance and s3 bucket per serivce#2020-04-2814:42marciolHere at PayGo, a company which the main responsability is deal with payments in the ecosystem of C6 Bank (somewhat Nubank competing) we are using Clojure and Datomic in some services, and we are trying to build something like a template as well, it is on my list right now.#2020-04-2814:47marciolThank you @U061BSX36, it is apparent more and more that with Datomic other kinds of patterns are needed.
In order not to lose the advantage of having the database in our application process, as we have today using Datomic on-prem where each application is a peer, we need to plan how to make the application run using the Datomic Ions strategy.#2020-04-2814:48marciolWe are still weighing the pros and cons of this approach.#2020-04-2814:49eraadNice, good way of thinking about it.#2020-04-2814:50marciolRegarding the usage patterns, I wonder if it is possible for an application that depends on multiple databases to make a query joining other databases, as described by @U09K620SG in this article:
http://www.dustingetz.com/:datomic-myth-of-slow-writes
> The problem with place-oriented stores is that sharding writes forces you to shard your reads to those writes.#2020-04-2814:52marciolNubank uses Pathom so they do almost the same, but relying on each service to get from its database a specific part of the data, aggregating all this data after that.#2020-04-2814:54Dustin GetzDoes Ions have multi-db queries? I thought Cognitect quietly turned that off shortly after Datomic Cloud release, not sure if they ever turned it back on with Ions#2020-04-2814:55marciolYes, but with Datomic on-prem we can use multi-db queries. I’m just wondering about how the application will behave regarding memory usage, latency, etc#2020-04-2814:56marciolOr just use Pathom to obtain the same result#2020-04-2814:56Dustin GetzFor on-prem, the databases will compete for object cache in the peer#2020-04-2814:58marciolYes, it is what I thought, and this can happen even with one database, depending on the amount of data and usage patterns, as we can read in this awesome post from Nubank:
https://medium.com/building-nubank/the-evergreen-cache-d0e6d9df2e4b#2020-04-2815:07marciolSo I’ll avoid future problems by giving up what would be a fantastic feature 😅#2020-04-2815:07marciolunless someone changes my mind 😄#2020-04-2815:35marciolBased on what @U061BSX36 said, I think that the smart move is to avoid multiple databases, and only break things up if:
1. You hit the write throughput limit of one transactor,
2. The amount of data is so huge that you start to experience issues related to object cache space.
Can you confirm this usage pattern:
cc: @U061BSX36 @U09K620SG @U072WS7PE @U05120CBV @val_waeselynck#2020-04-2815:45marshallon-prem or cloud?#2020-04-2815:49marciolon-prem at first @U05120CBV but we are evaluating cloud as well#2020-04-2815:51marciolbut one additional question @U05120CBV:
is it possible to avoid sharding in Datomic Cloud? What is the strategy when data grows really big?#2020-04-2815:53marshallIn on-prem you should run a single primary logical DB per transactor. However, in Cloud multiple DBs per system is fine.#2020-04-2815:54marshallcan you define “really big”?#2020-04-2816:07marciolThinking about the limit of one Datomic database instance being around 11 billion datoms, which corresponds to 353 datoms per second, we are planning to get on some of our transaction systems approximately 10% of this number.#2020-04-2816:08marciolSo what I consider “really big” is not that big by Datomic standards#2020-04-2816:09marshallThere is no hard limit on db size
The 10B number is a guideline around when you need to consider options for handling volume, shards, etc
if you’re unlikely to hit 10B datoms in 3 to 5 years, then i wouldn’t worry about it#2020-04-2816:15marciolSeems the case @U05120CBV, but I have another question regarding the architectural aspect: Is it possible to use a single primary logical DB to handle in a unified way all my data, even within a distributed services setting?
Sometimes, according to “common knowledge”, as pointed out by @U061BSX36, the way to go is multiple databases, but it seems to me that this can be different when using Datomic. It would be really awesome to concentrate all your data in one place.#2020-04-2816:16marshallit depends a lot on your particular system needs, architecture, etc#2020-04-2816:16marshallthere is no right or wrong answer#2020-04-2816:19marshallthere are definitely advantages to a central single db#2020-04-2816:20marshalllikewise, there are lifecycle advantages to individual services having their own dbs#2020-04-2816:20marshalli would assess the tradeoffs to the different options and determine which fits your particular system needs best#2020-04-2816:22marciolI need to isolate my individual bias towards monolithic applications or “modular monoliths” as some name it, in order to do the best assessment#2020-04-2816:23marciolBut it is really fantastic that Datomic offers a larger range of options#2020-04-2818:08marciolBtw, very good article @U0C4ECS1K#2020-04-2518:33alidlorenzoHey y’all I’m working on two libs as I build my Datomic API
one to manage AWS infrastructure: https://github.com/rejure/infra.aws
another to manage schema accretions: https://github.com/rejure/dation
both are intended for Datomic Cloud, try to make it easier to create configurations using EDN, and over time will (hopefully) provide more utilities for managing AWS infrastructure and database attributes/migrations, respectively
feedback is welcome 🙂 feel free to open an issue or discuss in #rejure channel I just created#2020-04-2518:34alidlorenzoon a related note, I have a question about Datomic accretions, unsure of what approach to take in the above lib
from my understanding, Datomic transactions are idempotent, so you could reinstall attributes every time on startup; but sometimes you also need to migrate data, so it helps to have some control over the process
currently I’m keeping a version number for schema that can be manually changed whenever schema/migration change is desired.
another approach I just read in one of @val_waeselynck’s posts about Datomic is to reinstall the schema on startup but track migrations* that have been run (so that unlike the schema, they’re not rerun).
i prefer this latter approach over version numbers, but i’m curious, the `ensure-schemas` example in day-of-datomic-cloud repo checks if a given schema attribute exists before reinstalling - is there a reason this approach was taken instead? are there considerations I’m not taking into account?#2020-04-2521:35val_waeselynckNote that Datomic transactions are not idempotent in general (e.g [[:db/add "my-tempid" :person/age 42]] will always create a new entity, for lack of an identity attribute to induce upsert behaviour).#2020-04-2521:37val_waeselynckI only meant that schema installation transactions tend to be idempotent (e.g., creating a new attribute). So if you're a bit careful, you can usually just re-run your schema installation transaction, but it does require vigilance.#2020-04-2521:40val_waeselynckI don't know if that's what you read, but you might take inspiration from this: https://github.com/vvvvalvalval/datofu#managing-data-schema-evolutions
(won't work for Datomic Cloud, but shouldn't be too hard to port)#2020-04-2522:46alidlorenzothanks for clarifying that. I was reading the “Using Datomic in your App” article, implementation in linked repo seems to be similar, will take a look.
as is, datofu only works with on prem, right?#2020-04-2611:59val_waeselynckYes#2020-04-2714:25PBIs there a correlation between datomic peer memory and datomic transactor memory?#2020-04-2718:27stuarthallowayHi @petr! What do you mean by correlation?#2020-04-2718:30stuarthallowayThe peers must follow the in-memory transaction stream for all databases they are connected to, which is up to the memory-index-max setting (on the tranactor(s)!)#2020-04-2718:31stuarthallowayBut processes can make independent choices about total JVM, object cache, etc. so long as they work within that rule. This is partially described at https://docs.datomic.com/on-prem/capacity.html#peer-memory.#2020-04-2817:12joshkhi know this question is 10% Ions and 90% AWS, but maybe one of you experts can help me out. i'm trying to configure a CloudWatch metric (later to be used as an Alarm), which looks for my Ion's cast/alerts and cast/events. after setting up the metric, i have no results in my graph, even though i can find filtered matches in my log streams when i use the same filter pattern.
1. in CloudWatch, i find my datomic-<system-name> log group
2. i select the radio option and click the Create Metric button
3. i enter the Filter Pattern {$.Msg = "my-specific-cast-event"} (which works as a normal filter pattern when searching log streams)
4. i choose a Metric Namespace (i don't think it matters which one)
5. i click Create Filter, and the final result is an empty graph#2020-04-2817:44Joe Lane@joshkh Are you in a solo topology?#2020-04-2817:44joshkhproduction#2020-04-2817:45Joe Lanehmm...
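For what it's worth, the same metric filter can be created programmatically instead of through the console UI; here is a sketch using cognitect aws-api against the CloudWatch Logs PutMetricFilter operation (the log group, filter, and metric names are placeholders):

```clojure
;; Sketch: create the CloudWatch metric filter from code rather than
;; the console. Names below are placeholders for your system.
(require '[cognitect.aws.client.api :as aws])

(def logs (aws/client {:api :logs})) ; CloudWatch Logs client

(aws/invoke logs
  {:op :PutMetricFilter
   :request {:logGroupName  "datomic-my-system"
             :filterName    "my-specific-cast-event"
             :filterPattern "{$.Msg = \"my-specific-cast-event\"}"
             :metricTransformations
             [{:metricName      "MySpecificCastEvent"
               :metricNamespace "MyApp"
               :metricValue     "1"}]}})
```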
NOTE AWS will display all metrics in lowercase with the first character capitalized. As an example, the aforementioned :CodeDeployEvent will display as Codedeployevent in both the metrics and the logs. Additionally, CloudWatch Metrics do not have namespaces, and any namespace provided in the metric name will be ignored.
#2020-04-2817:45Joe LanePer https://docs.datomic.com/cloud/ions/ions-monitoring.html#metrics#2020-04-2817:48Joe LaneAre you trying to cast a metric from your local environment? I'm not sure that cast/metric works locally, it may have to be deployed. It's been a while since I've worked with these.#2020-04-2817:51joshkhhey no worries, and thanks for the input. the cast is coming from a deployed environment, and i can see them in my cloudwatch logs. i just can't seem to wrangle them in the CloudWatch Metrics (which i think are different from what Datomic calls Metrics)#2020-04-2817:53joshkhwait.. wait. i think you're totally on to something. thanks Joe!#2020-04-2817:53joshkh(i'm using cast/event, not cast/metric)#2020-04-2819:01Joe LaneYeah, you gotta use metric 🙂#2020-04-2817:37Willwhen I submit a retract transaction with this form:
[:db/retract entity-id attribute]
I get the following error:
Error printing return value (IndexOutOfBoundsException) at clojure.lang.PersistentVector/arrayFor (PersistentVector.java:158). null
my code specifically looks like:
(d/transact conn [[:db/retract 17592186123123 :entity/attribute]])
and the relevant schema looks like:
{:db/ident :entity/attribute
:db/valueType :db.type/tuple
:db/tupleTypes [:db.type/string :db.type/string]
:db/cardinality :db.cardinality/many}
I don't think the fact that the attribute is a tuple or cardinality many is relevant, I've tried retracting string valued attributes with cardinality one in the same way and gotten the same error.
Looking at the documentation here:
https://docs.datomic.com/on-prem/transactions.html#list-forms
it seems like I should not have to specify a value for a retraction and if the value is not specified it will retract all the attributes that match the supplied entity id and attribute.
Anyone have any thoughts?#2020-04-2817:41joshkhi think that functionality was introduced only in the very latest release of Datomic (13 Feb 2020). are you on the latest version?
https://docs.datomic.com/on-prem/changes.html#0.9.6045
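For peers that predate that release, a workaround sketch: query the attribute's current values and retract each one explicitly (peer API, matching the d/transact usage above; `retract-all-tx` is a hypothetical helper name):

```clojure
;; Workaround for peers that predate value-less :db/retract:
;; look up the current values and retract each one explicitly.
(require '[datomic.api :as d])

(defn retract-all-tx [db e a]
  (for [v (d/q '[:find [?v ...] :in $ ?e ?a :where [?e ?a ?v]] db e a)]
    [:db/retract e a v]))

;; e.g. @(d/transact conn (retract-all-tx (d/db conn) 17592186123123 :entity/attribute))
```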
https://docs.datomic.com/cloud/releases.html#616-8879 (cloud)#2020-04-2817:42Willah that'll be it, we're on 0.9.5981#2020-04-2817:43Will@joshkh thanks for the fast response!#2020-04-2817:44joshkhno problem! it's definitely a feature i thought would have existed for ages. but alas, we have it now 🙂#2020-04-2907:08robert-stuttaford@marshall @jaret hey guys 👋 we want to switch from Java 8 to Java 11, but when i start a pro-0.9.6024 transactor, I get this warning:
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass$3$1 (file:/Users/robert/Datomic/datomic-pro-0.9.6024/lib/groovy-all-1.8.9.jar) to method java.lang.Object.finalize()
i did try to find supported java versions on the datomic doc site but couldn't find anything. got any advice for me, please? thanks!#2020-04-2911:59favilaI’ve been told by Cognitect (never seen in docs though) that java 11 should work.#2020-04-2911:59favilathis error re groovy: it’s actually an unused dep and was removed in the next version 0.9.6045#2020-04-2911:59favila(you can see that in the changelog)#2020-04-2912:00favilayou can probably delete that jar file from libs if you don’t want to upgrade#2020-04-2912:47marshallAccurate ^#2020-04-2913:50robert-stuttafordoh ok beeauuutiful! thanks @U09R86PA4 and thanks for confirming @marshall!#2020-04-2915:25kennyIs there any documentation on what passing :repl-cmd to datomic.ion.dev/push does exactly? I would've thought it would allow me to run the push with additional aliases but it doesn't appear to have any effect.#2020-04-2915:27kennyMore to that point, how does push know which paths to include on the final classpath that will be uploaded to S3? Does it simply not include any aliases? If so, is there a way to have it include some aliases?#2020-04-2917:06kennyI am getting an alert with Datomic Cloud after deploying my Ion:
":datomic.cloud.cluster-node/-main failed: 'datomic/ion-config.edn' is not on the classpath"
No errors reported when pushing and deploying from the REPL. I can also slurp my ion-config.edn from the REPL:
clj -A:dev:ion-deploy
Clojure 1.10.1
user=> (require '[clojure.java.io :as io])
nil
user=> (slurp (io/resource "datomic/ion-config.edn"))
"{:allow [],\n :lambdas\n {:query-pricing-api\n {:fn cs.ions.pricing-api/lambda-handler,\n :description \"Query the pricing-api.\"}},\n :app-name \"datomic-import-test\"}\n"
I'm missing something between what is on the classpath locally and what it ends up deploying.#2020-04-2917:15kennyDoes ion push somehow take into account .gitignore?#2020-04-2917:21kennyAh, I think Ion push simply ignores all aliases.#2020-04-2919:19kennyWe're using a monorepo style project where lots of dependencies are all :local/root. Datomic Ions appear to require no :local/root deps even if the git repo is clean and all :local/root deps are within the same git repo. Is this a necessary constraint?#2020-04-2919:41Alex Miller (Clojure team)Local root deps are not in git repos from dep’s perspective#2020-04-2919:41Alex Miller (Clojure team)They are local unmanaged resources#2020-04-2919:45Alex Miller (Clojure team)It seems unlikely that Datomic team would infer this semantic over the top. I think the real place to work this problem is in tools.deps but it really requires a top down intent to address this monorepo use case and I don’t think that’s something likely to happen soon#2020-04-2919:47Alex Miller (Clojure team)As far as workarounds, I’m not sure all of the options available for ions#2020-04-2920:59Joe Lane@kenny Create a "runner" project which depends on specific git revision but then allows the deps to be overridden when you have a :local alias.
Example of this "runner" approach with ions is https://github.com/Datomic/ion-event-example-app which just composes https://github.com/Datomic/ion-event-example. You deploy the former. If you expanded on this style with many smaller ion modules/projects you can compose different ion libraries in any way you want.
I'm working on a reference application that demonstrates this by having various "services" (different apps like a health-tracker, a recipe app, a todo application, etc.) all deployed by the same "runner" which references each of these projects at a specific git sha and development is very smooth because in cursive I can create a multi-module project which allows me to edit my :local/root siblings at the same time but keep them in different git repos.#2020-04-2921:22kennyYeah, I suppose I could to that. Would involve creating a deps.edn that contains all of my sub-projects. Easy to do programmatically.#2020-04-2921:00Joe LaneI'll try to get around to sharing my example this weekend.#2020-04-2923:53kennyAny idea what I need to do to get cast/event to work locally?
(cast/event {:msg "Foo"})
Execution error (IllegalArgumentException) at datomic.ion.cast.impl/fn$G (impl.clj:14).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil#2020-04-3000:03Joe Lane@kenny https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow#2020-04-3000:04Joe LaneGotta call (cast/initialize-redirect :stdout) , or :stderr , or "somefile.log", or :tap.#2020-04-3000:16kennyOh yeah. Strange error message for that.#2020-04-3011:23joshkhhas anyone experienced the following exception from a deployed HTTP Direct ion project?
Uncaught Exception: java.io.IOException: Too many open files
we see the exception shortly after the EC2 instance comes up, and once it happens the web server stops responding for good. the project does work with temporary files but very rarely and only on demand, so it's strange to see the exception shortly after the app starts.#2020-04-3012:00favila“Open files” can also mean file descriptors, meaning sockets. Do you make lots of tcp or http connections maybe?#2020-04-3012:12joshkhi think i found the problem. it looks like the error was coming from a function which created a new cognitect-labs/aws-api client in a let every time the function was called (which isn't a good practice, and now the client is def'ed). perhaps the client opens files, maybe for end point resolution or something?#2020-04-3012:40favilaI think it probably just opened a new http connection each time#2020-04-3013:30ghadi@U0GC1C09L can you list the version of the client and whether you pass anything to the constructor besides :api#2020-04-3013:35joshkhsure thing.
client version:
{com.cognitect.aws/api {:mvn/version "0.8.305"}}
constructor:
(aws/client {:api :kms})#2020-04-3011:25joshkhthe stack trace points to org.eclipse.jetty.util.component.ContainerLifeCycle & cognitect.http_client#2020-04-3012:42MarcusWhen using the client-pro api there is a function create-database. This requires the peer server to be running. But the peer server requires a database name (in the -d parameter) to run. How can I create a database with the client-pro api?#2020-04-3012:42MarcusDo I need to use the full peer api?#2020-04-3012:49favilayes, you need a peer to create the database; then you can run peer server#2020-04-3012:49MarcusOk. But what then is the use of create-database? to create subsequent databases?#2020-04-3012:50favilait’s for cloud (and other non-peer-server scenarios)#2020-04-3012:50favilanote the docs for create-database:https://docs.datomic.com/client-api/datomic.client.api.html#var-create-database#2020-04-3012:51Marcusah 🙂#2020-04-3012:51Marcusthanks 🙂#2020-04-3015:13tvaughanSorry if I missed this elsewhere, but is it permissable to provide a public docker image of Datomic on-prem (without a license key)?#2020-04-3016:37marshallno. you can’t distribute the bits of Datomic on-prem#2020-05-0113:39robert-stuttafordwhat could cause this to happen, @marshall @jaret? no ddb throttling at all, heartbeat totally stable, but all services started getting transaction timeouts, and as you can see on the graph, live index threshold stuck at full. first time we've ever seen this!#2020-05-0113:45jaret@robert-stuttaford would you be able to start a case and share logs, version, config settings? 
We’d be interested in looking at this in more detail.#2020-05-0113:46jaret<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2020-05-0113:47robert-stuttafordabsolutely#2020-05-0113:58robert-stuttafordmy colleague Geoff will mail soon!#2020-05-0113:58robert-stuttafordthanks @jaret#2020-05-0114:30potetmI’m curious what the answer is when ya’ll find it.#2020-05-0115:49BrianIs it possible to restore a database from a file on disk into an in-memory database? I'd like to write some tests for my code but don't want to point them towards the database on my system since that isn't portable. I know how to restore a database from the command line, what I am looking for is some Clojure code which demonstrates how to take an on-disk backup and restore it into an in-memory Datomic database for use in my tests. Thanks!#2020-05-0116:07kennyIs datomic.query.support a public api?#2020-05-0116:51ghadino#2020-05-0118:56micahAm pondering the use of :db/fulltext. It’d mean I have to copy data over to new attribute. Is fulltext query really much faster than [(.contains :attr/name ?q)]?#2020-05-2422:07kmyokoyamaOk, your screenshot answers that. Thank you, @U2TLBUVRS and @U0CJ19XAM!#2020-05-2500:54naomarikDoes the starter license also include ability to run in multiple environments (dev/staging/prod) with the same license?#2020-05-2502:01naomarikJust tried, looks I can.#2020-05-2513:35fmnoisehi everyone, is there any way to add new schema-level attributes (like db/doc) which would be shown in datomic console?#2020-05-2514:49naomarikDatomic console still works? On the latest datomic (1.0.6165) I get this error: ERROR: This version of Console requires Datomic 0.8.4096.0 to run#2020-05-2709:57eany experiences with migrating or mirroring data from one datomic cloud instance to another?
is there some straightforward technique to copy/replay all transactions to a new instance, and how would they be queried from the source db?#2020-05-2713:35Joe Lane@e https://docs.datomic.com/client-api/datomic.client.api.html#var-tx-range
(d/tx-range conn {:start nil :end nil :limit -1}) will get you started.#2020-05-2713:36Joe LaneIt's not a trivial transformation though.#2020-05-2714:35stuarthallowayWe are considering bumping the Clojure requirement for the Peer API. Please let us know your thoughts! https://forum.datomic.com/t/peer-api-clojure-version-poll/1469#2020-05-2716:26kennyIs it okay to publish a public docker image with the datomic-access script in it?#2020-05-2718:21currentoorAny plans on adding native support to Datomic for the java.time classes? Or just java.time.Instant?
Right now I’m storing everything (local dates, points in time, etc) as java.util.Date and converting to java.time.Instant to perform operations. Being able to read things out as java.time.Instant would mean a lot less conversion back and forth.#2020-05-2807:24tatutIf I migrate data serialized from another db instance to a new one, is it safe to transact them with the original :db/id numbers? or do I need to make them strings and do some mappings… (in datomic cloud)#2020-05-2808:18fmnoisenope, it's not safe, you should have some app level identities#2020-05-2808:28tatutok, thanks#2020-05-2812:13Lone RangerFINALLY got company to green light datomic and now that I'm using the real thing (vs free/datascript), I don't really have a good mental model for why I would choose the client API (`datomic.client.api`) vs datomic.api. Is this a philosophical thing or are they different tools for different mediums? Or are they just different tools in different bags that could be used for similar tasks?#2020-05-2812:26arohnerdatomic.api is only available for peers (on-prem), not cloud#2020-05-2812:29arohnerhttps://docs.datomic.com/on-prem/clients-and-peers.html#2020-05-2812:33Lone Rangerahhh perfect#2020-05-2813:12Lone RangerThank you! 🙇#2020-05-2813:10arohnerHow do queries with composite tuples work? Can you ‘destructure’ and query via the tuple, or is going through the source attributes the only way?#2020-05-2813:18marshall@arohner https://docs.datomic.com/cloud/query/query-data-reference.html#untuple
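On the java.time question above, a small conversion shim is usually enough while Datomic instants remain java.util.Date; a sketch (the UTC-midnight convention for LocalDate is an arbitrary choice here, not something Datomic prescribes):

```clojure
;; Sketch: keep java.util.Date (:db.type/instant) at the storage edge,
;; work with java.time everywhere else.
(import '(java.util Date)
        '(java.time Instant LocalDate ZoneOffset))

(defn date->instant ^Instant [^Date d] (.toInstant d))
(defn instant->date ^Date [^Instant i] (Date/from i))

;; LocalDate carries no zone; pick a convention (UTC midnight here)
;; before storing it as an instant.
(defn local-date->date ^Date [^LocalDate ld]
  (Date/from (.toInstant (.atStartOfDay ld ZoneOffset/UTC))))
```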
yes 🙂#2020-05-2813:18marshallyou can do it either way#2020-05-2813:18arohnerthanks#2020-05-2814:29arohnerdb.error/entity-attr Entity -9223301668109487396 missing attributes
#2020-05-2814:37arohnerIs that an error in the way I defined the spec, or a bug in datomic?#2020-05-2814:38ghadiplease provide your inputs#2020-05-2814:38ghadiboth the d/transact args and the specs#2020-05-2908:48arohner{:db/ident ::money/currency
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one}
{:db/ident ::money/value
:db/valueType :db.type/bigdec
:db/cardinality :db.cardinality/one}{:db/ident ::money/money
:db/valueType :db.type/tuple
:db/tupleAttrs [::money/currency ::money/value]
:db/cardinality :db.cardinality/one}{:db/ident ::accounts/tx-item
:db.entity/attrs [::accounts/account-id ::money/money]}
(d/transact conn {:tx-data [#:griffin.proc.accounts{:tx-id #uuid “ddcc9e32-c65e-5024-b82a-be3a3324d496”, :tx-items #{{:db/ensure :griffin.proc.accounts/tx-item, :griffin.proc.accounts/account-id #uuid “35db4a29-0bcc-5614-b27e-920c5f31a3a4", :griffin.money/money [:GBP 0.00M]} {:db/ensure :griffin.proc.accounts/tx-item, :griffin.proc.accounts/account-id #uuid “c0d87137-2c39-520c-ad6d-aa529843c38f”, :griffin.money/money [:GBP 0.00M]}}}]})#2020-05-2911:55favilaAttributes with TupleAttrs are not meant to be written directly: they will be computed. This tx writes money/money, it should instead write currency and value#2020-05-2911:56favilaI don’t think composite attr updates flow back into the non-composite attrs #2020-05-2911:56favilaOnly the other way around#2020-05-2912:51arohnerThe official docs seem to do that:
[{:reg/course [:course/id "BIO-101"]
:reg/semester [:semester/year+season [2018 :fall]]
:reg/student [:student/email "#2020-05-2912:51arohnerIsn’t that year+season a write to a composite tuple by passing in a vector?#2020-05-2912:52favilano that is a lookup ref#2020-05-2912:53favila:reg/semester [:semester/year+season [2018 :fall]] will desugar to [:db/add "entity-temp-id" :reg/semester [:semester/year+season [2018 :fall]] which will resolve to an entity id with :semester/year+season equal to [2018 :fall]#2020-05-2912:53favila(or fail if no such entity)#2020-05-2912:54favilaI think your cryptic, horrible error message is :db/ensure complaining that the source tuples are not written#2020-05-2912:55favilai.e. that ::money/currency ::money/value were not asserted#2020-05-2912:55arohnerit works when I assert money/currency and money/value, thanks#2020-05-2912:56favilaNote this in the docs:#2020-05-2912:56favila> Composite attributes are entirely managed by Datomic–you never assert or retract them yourself. Whenever you assert or retract any attribute that is part of a composite, Datomic will automatically populate the composite value.#2020-05-2912:58favilaso in your example you were looking at in the docs, the earlier transaction `
{:semester/year 2018
:semester/season :fall}
is what wrote [:semester/year+season [2018 :fall]]#2020-05-2911:25arohnerhrm, it seems like my tuple write was failing, and I don’t understand why.#2020-05-2912:37dmarjenburghI'm trying to do an index-pull but running into a Datomic Client Exception:
clojure.lang.ExceptionInfo: Datomic Client Exception {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :http-result {:status 403, :headers {"server" "Jetty(9.4.24.v20191120)", "content-length" "19", "date" "Fri, 29 May 2020 12:34:05 GMT", "content-type" "application/transit+msgpack"}, :body nil}}
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$channel__GT_seq.invokeStatic(sync.clj:72)
at datomic.client.api.sync$channel__GT_seq.invoke(sync.clj:69)
at datomic.client.api.sync$eval20791$fn__20808.invoke(sync.clj:113)
at datomic.client.api.protocols$fn__11940$G__11875__11947.invoke(protocols.clj:126)
at datomic.client.api$index_pull.invokeStatic(api.clj:293)
at datomic.client.api$index_pull.invoke(api.clj:272)
I can query the db normally otherwise.#2020-05-2912:49favilaare you sure the target server supports it?#2020-05-2913:14dmarjenburghHaha, I was under the impression the upgrade was already deployed, but it was still in the pipeline :face_palm::skin-tone-3: . Works now.#2020-05-2912:50arohnerThe fn is either a fully qualified function allowed under the :xforms key in resources/datomic/extensions.edn, or one of the following built-ins:
I can’t find anything else in the docs that reference extensions.edn. Where can I learn more about that?#2020-05-2913:07faviladoubling down on this question, it’s also not clear to me whether the extension function needs to exist on the client’s classpath or the client-server’s classpath#2020-05-2913:08favilaor why this is necessary at all for the on-prem api#2020-05-2915:51marshallThe extensions.edn file needs to be available in the classpath at that relative path (`resources/datomic/extensions.edn`)#2020-05-2915:51marshallit needs to be there in the system that will be doing the work#2020-05-2915:51marshallso if you’re using peer, in the peer process#2020-05-2915:52marshallfor client, it needs to be in the cp of the peer-server process#2020-05-2915:52marshallif you’re using it inside a transaction function, it would need to be in the transactor cp#2020-05-2916:12favilaAnd it looks like {:xforms #{var/name ,,,}} ?#2020-05-2916:14marshalli believe the value is a vector (or list) of symbols#2020-05-2916:14marshallset may work too#2020-05-2916:16marshallhttps://docs.datomic.com/cloud/ions/ions-reference.html#ion-config#2020-05-2916:16marshallbased on cloud, I would say a vector of fully qualified symbols#2020-05-2916:16marshallI’ll look at adding that detail in onprem docs#2020-05-2915:45jaretHowdy! We just released a fix for Datomic On-Prem Console. The latest release had a bug that caused console to fail to start. https://forum.datomic.com/t/datomic-console-0-1-225-now-available/1472#2020-05-2915:45arohnerIs it possible to use a lookup ref in the same transact that creates the unique identity?
It seems like the answer is no#2020-05-2915:49marshallNo, but you can use a tempid for that#2020-05-2915:57arohnerBut then I need to know whether the unique identity already exists or not#2020-05-2915:58marshalli think i’d need more detail
If you have one entity being asserted that has a unique ID and another that references it via tempid, Datomic’s entity resolution should handle that correctly whether or not the entity with the unique ID already exists. If it does, it will become an upsert; if it doesn’t, it will be created#2020-05-2916:17favilaI’m guessing from our earlier conversation that Allen wants to use this with a unique-identity composite attr. I think this doesn’t work unless you assert the composite. e.g. {:db/id "tempid" :attr-a 123 :attr-b 456} where the upsert attr is :attr-a+b#2020-05-2916:18marshallyes, agreed if you’re upserting you need to include the :attr-a+b in the transaction#2020-05-2920:07arohnerAFAICT, it doesn’t work with a scalar unique attribute either#2020-05-2920:10marshallCan you provide your txn data and results you see not working?#2020-05-2920:13arohnerThe code is kind of lengthy and it’s late here (London).
I’m trying to build a ledger. When inserting transaction items:
{:db/ensure ::accounts/tx-item
::accounts/account [::accounts/account-id (::accounts/account-id i)]
::money/currency (-> i ::accounts/tx-amount :currency keyword)
::money/value (-> i ::accounts/tx-amount :value)}#2020-05-2920:14arohnerI’m trying to insert :accounts/account, in the same transaction as the tx-items. Inserting tx-items fails with
:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Unable to resolve entity: [:griffin.proc.accounts/account-id #uuid \"17af261f-9ad5-58e6-938f-b3b7a0ffee22\"] in datom [-9223301668109421343 :griffin.proc.accounts/account [:griffin.proc.accounts/account-id #uuid \"17af261f-9ad5-58e6-938f-b3b7a0ffee22\"]]"
#2020-05-2920:16marshallYou cant use the lookup ref#2020-05-2920:16marshallYou need to use the tempid#2020-05-2920:16marshallIf youre creating the entity in the same transaction#2020-05-2920:17marshallCreate the account with a db/id "foo"#2020-05-2920:17marshallAnd "foo" in place of your lookup ref#2020-05-2920:19arohnerRight. That’s not convenient because it requires me knowing whether the entity already exists or not, which requires an extra query#2020-05-2920:19marshallNot if you have the account entity in the same txn#2020-05-2920:22marshall[{:account/id "someuniquevalue"
:db/id "foo"}
{:transaction/value 20
:transaction/account "foo"}]#2020-05-2920:22marshallif account/id “someuniquevalue” exists, it will upsert#2020-05-2920:22marshallif not it will create#2020-05-2920:22marshalleither way, the txn with value 20 will have a ref attr pointing to that account#2020-05-2920:23arohnerIt’s been several years since I used datomic in anger. At the time, the advice was don’t assert facts unnecessarily. Won’t that create new datoms every time, even if the account already exists?#2020-05-2920:24marshallno#2020-05-2920:24marshalldatomic does redundancy elimination#2020-05-2920:24marshallif the acct entity exists it will upsert#2020-05-2920:24marshallif it doesnt it will be created#2020-05-2920:25marshallany attr/val pairs that already exist for that entity will be eliminated if the value is identical#2020-05-2920:25marshallif the value is different it will retract the old value and assert the new value#2020-05-2920:25marshallif the attr is not present at all on that entity it will assert the attr/value for that entity#2020-05-2920:27marshallnot sure where “dont assert facts unnecessarily” would come from
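A minimal sketch of the pattern marshall describes, using the client API; the attribute names follow his example, and it assumes :account/id is installed with :db/unique :db.unique/identity:

```clojure
;; Sketch (client API): create-or-upsert an account and reference it in
;; the same transaction via a string tempid. Assumes :account/id is a
;; :db.unique/identity attribute; names follow the example above.
(require '[datomic.client.api :as d])

(d/transact conn
  {:tx-data [{:db/id      "acct"               ; string tempid
              :account/id "someuniquevalue"}   ; upserts if it already exists
             {:transaction/value   20
              :transaction/account "acct"}]})  ; ref resolves to the same entity
```

Either way the tempid "acct" resolves to a single entity id, so no lookup query is needed beforehand.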
certainly doing the work of redundancy elimination has some cost, but i would not expect it to be prohibitive, especially in this case, as you have to “find” the account entity either way, whether it’s via the entity being asserted or with the lookupref#2020-05-2920:30marshalla completely redundant txn would create a tx/Instant datom#2020-05-2920:30marshallso if everrything you assert is duplicate you’d be accumulating an “unnecessary” couple of datoms#2020-05-2920:31marshallwhich again, not a big deal as long as you arent doing it in huge numbers#2020-05-2920:31marshalli.e. here and there totally nbd
every single minute, 10 times a minute all the time… maybe not so great#2020-05-2920:37arohnerThat’s good to know#2020-05-2920:37arohnerThe rest of the transaction definitely has to happen and will have novelty, so it sounds like nbd#2020-05-2920:59marshall👍#2020-05-2921:28kschltzHi there, We're currently using datomic cloud and I've been stuck with the following:
We have several source applications providing financial data, each one with its own payload, so we decided to have a 'normalizer' service
to convert each format to a common payload, so we can build our products in an agnostic manner.
To illustrate this:
;; {:source.a/name   "John Doe"
;;  :source.a/amount 44.50}
;; would become something like ->
;; {:common/name   "John Doe"
;;  :common/amount 4450
;;  :common/source {:source.a/name   "John Doe"
;;                  :source.a/amount 44.50}}
We chose to keep the original format in the final structure to maintain some backtracking and ease integration with legacy systems.
Now, say there is a buggy implementation in this conversion function rounding floats or any other error, and we end up with incorrect values
in the common payload, but we still have the original data. Does datomic have any support for me to bulk 'alter' that data?
The first solution that came to my mind, was to query all the incorrect data, extract the source info, pass it through the correct function, then transact it back
to datomic. But I wonder, does Datomic has any feature to better support that, something closer to a "compare and swap-like" feature?
Thanks to you all, patient readers 😄#2020-05-2921:47marshallDatomic has compare and set: https://docs.datomic.com/cloud/transactions/transaction-functions.html#db-cas#2020-05-2921:47marshallYou'd need to handle the bulk nature yourself.
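A minimal sketch of the :db/cas approach marshall links; the entity id and amounts are illustrative, and in practice the corrected value would be recomputed from the stored :common/source:

```clojure
;; Sketch: :db/cas only succeeds if the old value still matches, so a
;; concurrent correction cannot be silently overwritten.
;; `eid`, 44, and 4450 are illustrative values.
(d/transact conn
  {:tx-data [[:db/cas eid :common/amount 44 4450]]})
;; the transaction aborts if :common/amount is no longer 44
```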
Also, if your entities are cardinality one, you could just reassert all the values#2020-05-2921:48marshallOnes that were the same would be unchanged (redundancy elimination)#2020-05-2921:48marshallOnes that differ would be "upserted"#2020-05-2921:49marshallIf the attributes are cardinality many youd need to retract them explicitly#2020-05-2921:49marshall@schultzkaue ^#2020-05-2921:50kschltzthanks a lot#2020-05-3019:49Drew VerleeWouldn't it be safer to just create a new set of data? e.g common.v2/name?#2020-05-3020:22Drew VerleeI take it the trade off in having unique identities is that they require more book keeping by the database?#2020-05-3120:14Drew VerleeI expected the following query to return [[2] [2]] instead of [[4]] because the with clause should have grouped by list first:
(d/q '[:find (count ?todo)
       :with ?l
       :where
       [?l :list/todo ?todo]]
     db)#2020-06-0100:14Drew Verleei don't know who to give this feedback to but https://docs.datomic.com/cloud/operation/upgrading.html#know-your-version
really needs to switch the order of instructions around so that "storage and compute upgrade" is first.#2020-06-0119:52Drew VerleeWhats an ideal way to do schema discover on a large database?#2020-06-0121:03alidlorenzothe ion starter has an example of querying the schema of a database. not sure if that’s what you’re asking about, but I’ll post link in case it is: https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter.clj#L49-L56#2020-06-0120:49ghadi"discover"?#2020-06-0213:44Drew VerleeThe simipliest form of discover is to just get all the db idents and do a string filter on them. I'm not sure it gets better then that. The get-schema function alid linked seems to have some hints in it.#2020-06-0213:50ghadiin Datomic, attribute definitions are themselves entities#2020-06-0314:00Drew Verleemakes sense thanks.#2020-06-0213:51ghadiyou can query them normally:
[:find ?attribute
 :where [_ :db/ident ?attribute]]
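Since attribute definitions are entities, the same query can be extended to pull richer schema information; a sketch (matches only attributes, because plain idents have no :db/valueType):

```clojure
;; Sketch: list every attribute with its value type and cardinality.
(d/q '[:find ?ident ?type ?card
       :where
       [?a :db/ident ?ident]
       [?a :db/valueType ?vt]  [?vt :db/ident ?type]
       [?a :db/cardinality ?c] [?c :db/ident ?card]]
     db)
```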
#2020-06-0213:51ghadiand you can also augment them with your own information#2020-06-0213:51ghadithat datomic doesn't know about#2020-06-0213:53ghadiso you can add attributes like :drewverlee.attribute/relates-to#2020-06-0213:53ghadito relate one attribute to another#2020-06-0213:54ghadior to assist making schema html documentation#2020-06-0312:23robert-stuttafordhowdy @marshall, just following up on that question of AMI configuration 🙂
also, i have a new question, about DynamoDB and garbage collection. we GC once a week, for a-month-ago-or-older as the manual suggests.
our ddb table is 220gb+ big. a fresh Datomic restore is ~35gb. should we just do a table hopscotch every so often? what is all that extra data, if not the stuff the Datomic GC process would catch?#2020-06-0316:11marshall@robert-stuttaford I’ll try to get something for the AMI config today or tomorrow, sorry for the delay
The additional storage is likely unrecoverable garbage, which can be generated by failed indexing jobs and/or transactor failovers during indexing. Yes, the easiest way to deal with it is to restore your DB into a fresh table#2020-06-0316:33robert-stuttafordthank you @marshall - that's helpful!#2020-06-0413:51joshkhwhile using a library that implements serialization, i found that query results (which appear to be PersistentVector) throw an .NotSerializableException: datomic.client.impl.shared.Db exception, where as a usual Clojure PersistentVector does not. calling vec on the results of a query solves the problem, but i'm wondering if there's a better way to solve this#2020-06-0414:04favilaI think that’s the right way. The result object, especially from the client sync apis, are a bit magical#2020-06-0414:04favilamany of them are lazily realized#2020-06-0414:05favilaif you use :keys in a query, that is a datatable-like object#2020-06-0414:05favilaetc#2020-06-0414:06favilaeven in the peer api this is true. queries may return ArrayList instead of vector for efficiency. d/datoms and friends return a reified thing that implements seq and iterable#2020-06-0414:07favilaso in general the apis only guarantee the interfaces and behavior of return values, not type#2020-06-0415:20marshall@robert-stuttaford ^^#2020-06-0514:43YasHello Guys, does anyone able to restore datomic database into postgres?#2020-06-0516:19faviladatomic on-prem backups are storage-system agnostic. You can restore a backup from any kind of storage to any other kind#2020-06-0619:01David PhamI am trying to understand the pricing of Datomic on prem. it costs 5k$/year/system. How do you define the number of systems? Is it the number of writer?#2020-06-0721:48jdhollisI’ll defer to @marshall, but I suspect a system is the combination of compute + storage. For Datomic Cloud, this easily maps to the CloudFormation stacks involved. I’m not sure how that plays out for on-prem.
You will only have one transactor (i.e., “writer”) at any time for each system (though you can have more than one running for fail-over).#2020-06-0802:58Alex Miller (Clojure team)Yes, for on-prem a system will have one active transactor (may also have an HA transactor)#2020-06-0805:10David PhamThanks!#2020-06-0811:11joshkhi can't believe i'm asking this, but are there any future plans for a nodejs compatible datomic cloud client? i'm just thinking about speedy lambdas that can't really afford the cold startup time of the JVM. i guess there's always graalvm, but still, the ease of whipping up and deploying a cljs lambda is attractive.#2020-06-0815:58jdhollisYou could also just use ions with HTTP Direct.#2020-06-0815:58jdhollis(I’m assuming you’re wiring these up to an API Gateway.)#2020-06-0815:59jdhollisThey stay spun up.#2020-06-0816:00jdhollisYou have to handle routing within the ion, but it has a significant (positive) response time impact.#2020-06-0816:46joshkhsounds promising but i don't quite follow.
HTTP direct lets me route api gateway traffic directly to datomic, but i don't see how that lets me query datomic from a cljs (nodejs) lambda which is my goal 🙂#2020-06-0816:49joshkhit looks like on-prem has a REST api, maybe cloud has something similar?#2020-06-0817:19jdhollisWhat’s your Lambda hooked up to?#2020-06-0817:19jdhollisTypically, I only worry about cold starts if it’s user-facing.#2020-06-0817:20jdhollis(Though I suppose there’s a Rube Goldberg version that hits a private API Gateway endpoint proxying directly to an ion.)#2020-06-0820:30joshkh^ yeah, i entertained the idea 😉 i don't have a specific use case, but let's say something like an API Gateway Authorizer, or an authentication lambda hooked in to Cognito, both of which are customer facing#2020-06-0821:15csmNot too long ago I went and made a nodejs cloud/peer server project: https://github.com/csm/datomic-client-js not official, but it does work with cloud and peer server#2020-06-0821:25jdhollisNeat.#2020-06-0821:27jdhollisAlas, not a lot of good options there if the Lambda is low traffic.#2020-06-0821:27jdhollisEven the Lambdas created to proxy to ions use the JVM if I’m not mistaken.#2020-06-0821:28jdhollisThe API Gateway version might be the best option, latency-wise 😛#2020-06-1012:12joshkh@UFQT3VCF8 this is fantastic, thank you for sharing! i'm curious - why did you write the library in JS and not CLJS?#2020-06-0813:11arohnerIs there a way to assert a query uses an index?#2020-06-0821:07colinkahnI’m trying to understand the terminology in datomic for “peer”. Is the Peer api and peer server similar in some way or does peer just have two meanings?#2020-06-0822:03favilaThe peer api is the api used to become a peer.
A peer server is a peer that provides the server half of the client api#2020-06-0822:04favila“peer” means roughly “member of the datomic cluster”. They have direct connections to transactor and storage#2020-06-0822:06favilahttps://docs.datomic.com/on-prem/architecture.html#peers#2020-06-0915:39colinkahn@U09R86PA4 thanks, I think it makes sense now. Peer and peer server is the full Datomic api with caching etc, where Client is just an interface that connects to the peer server.#2020-06-0916:57favilacorrect. Although a bit of nuance: the client api is designed to be possible to use from a non-peer process, but in certain circumstances for performance it can actually run in a peer and use that peer’s resources directly#2020-06-0916:57favilaI think this is what ions do#2020-06-0916:58favilathey use the client api, but ion processes are also peers so the client api is implemented to call directly into the peer api without crossing a process boundary#2020-06-0919:35colinkahnInteresting, but this is some custom thing that is happening? I was curious if you could use the connection from the peer api with a client, but the apis didn’t seem compatible, with Client requiring an endpoint. But there was a :server-type :local which I couldn’t find docs on that made me wonder#2020-06-0920:21favilathere are three implementations I know about: client (use a transit connection to a server, shared by peer-server and cloud, used by ion if running outside the cloud), peer-client, and local (used by ion if the ion is running in the cloud)#2020-06-0920:21favilapeer-client looks like it would be for on-prem peers; local is for cloud “peers”#2020-06-0920:22favilabut I’ve never seen the implementations for either one, and I don’t think they’re directly supported#2020-06-1009:50katoxHi, we might need to transfer an existing datomic cloud system to a new aws account. 
What is the best way to handle that?#2020-06-1022:47kennyWhen deploying a Datomic HTTP direct endpoint, the final step in creating an API gateway says:
> Enter your `http://$(NLB URI):port/{proxy}` as the "Endpoint URL". This NLB URI can be found in the Outputs tab of your compute or query group https://console.aws.amazon.com/cloudformation/home#/stacks under the "LoadBalancerHttpDirectEndpoint" key
The value in my CF Outputs tab is formatted like this "http://entry.my-datomic-system-name.us-west-2.datomic.net:8184". If I were to follow the docs exactly, I would end up with a Endpoint URL that looks like this: http://http://entry.my-datomic-system-name.us-west-2.datomic.net:8184/{proxy}. I'm assuming that is not what the docs wanted, correct? #2020-06-1022:55Joe Lane@kenny You can watch the video tutorial on http direct for more clear instructions. Definitely don't do http://http://...#2020-06-1022:56kennyI didn't 🙂 Following the docs verbatim would lead to that URL. Surprised it wasn't caught. Not really a huge fan of video docs...#2020-06-1022:55Joe Lanehttps://docs.datomic.com/cloud/livetutorial/http-direct.html#2020-06-1022:59kennyAny idea why all calls to a Datomic http direct endpoint result in a 500 with this response?
{
"message": "Internal server error"
}
The API gateway logs end with a very unhelpful error.
Execution failed due to configuration error: There was an internal error while executing your request
#2020-06-1022:59kennyI don't think the request is even hitting Datomic.#2020-06-1023:02Joe LaneDid you watch the Video Tutorial?#2020-06-1023:03kennyIs there really info embedded in a video tutorial that is not in the textual docs?#2020-06-1023:03Joe LaneThe error message is telling you that you misconfigured it.#2020-06-1023:04Joe LaneI've seen that error before when I was working on creating an http-direct deployment and, in fact, I did misconfigure it. You're totally right that it isn't hitting datomic.#2020-06-1023:09kennyWatched the video. I have followed the steps exactly & still get the 500 🤔#2020-06-1023:17kennyFull example logs:
Execution log for request 6c5c6e23-0d88-4ac3-a160-374fc3842a83
Wed Jun 10 23:11:17 UTC 2020 : Starting execution for request: 6c5c6e23-0d88-4ac3-a160-374fc3842a83
Wed Jun 10 23:11:17 UTC 2020 : HTTP Method: POST, Resource Path: /datomic
Wed Jun 10 23:11:17 UTC 2020 : Method request path: {proxy=datomic}
Wed Jun 10 23:11:17 UTC 2020 : Method request query string: {}
Wed Jun 10 23:11:17 UTC 2020 : Method request headers: {}
Wed Jun 10 23:11:17 UTC 2020 : Method request body before transformations:
Wed Jun 10 23:11:17 UTC 2020 : Endpoint request URI:
Wed Jun 10 23:11:17 UTC 2020 : Endpoint request headers: {x-amzn-apigateway-api-id=eq2azct4a2, User-Agent=AmazonAPIGateway_eq2azct4a2, Host=}
Wed Jun 10 23:11:17 UTC 2020 : Endpoint request body after transformations:
Wed Jun 10 23:11:17 UTC 2020 : Sending request to
Wed Jun 10 23:11:17 UTC 2020 : Execution failed due to configuration error: There was an internal error while executing your request
Wed Jun 10 23:11:17 UTC 2020 : Method completed with status: 500
Everything appears correct. I wish aws had a bit more info as to what "configuration" could be causing this error.#2020-06-1023:21kennyThis is a uname deployment. I assume that can't matter though.#2020-06-1023:59kennyI can either https://docs.datomic.com/cloud/ions/ions-tutorial.html#orgef4cfed OR https://docs.datomic.com/cloud/ions/ions-tutorial.html#http-direct, right? I don't need to do the former to do the latter?#2020-06-1100:01marshallCorrect. Did you set up a vpc gateway?#2020-06-1100:01marshallVpc link#2020-06-1100:01kennyYes#2020-06-1100:01kennyAnd it is available.#2020-06-1100:01marshallWhat address did you usr#2020-06-1100:01marshallUse#2020-06-1100:01kennyWhere do I provide an address?#2020-06-1100:02kennyThe Endpoint URL is http://entry.datomic-prod-v2.us-west-2.datomic.net:8184/{proxy}#2020-06-1100:02marshallOr rather did you choose the correct nlb#2020-06-1100:02marshallAnd is this latest version of datomic#2020-06-1100:02kennyThere is only 1 production topology deployed in this account & Datomic is the only service using NLB.#2020-06-1100:04kennyOoo, it is not the latest. It is a version that should have http direct support. It's on 616 8879.#2020-06-1100:05kennyWill try updating to the latest version to see if that helps.#2020-06-1100:07marshallyeah, that should have it#2020-06-1100:07marshallfor sure#2020-06-1100:07marshallwhat’s in your ion-config ? @kenny
you need to have a valid :http-direct key in there for Datomic to start the http direct listener#2020-06-1100:08kenny{:allow [],
:lambdas
{:query-pricing-api
{:fn cs.ions.pricing-api/lambda-handler,
:description "Query the pricing-api.",
:concurrency-limit 100}},
:http-direct {:handler-fn cs.ions.pricing-api/web-handler},
:app-name "datomic-prod-v2"}#2020-06-1100:11marshallcan you look for "IonHttpDirectStarted" in your Cloudwatch logstream#2020-06-1100:11marshallfor the production system#2020-06-1100:12kennySure. I think it's not there. Not super familiar with CloudWatch Logs.#2020-06-1100:12marshallinclude the double quotes#2020-06-1100:13kennySame.#2020-06-1100:13marshallare you sure your system is actually starting up?#2020-06-1100:13kennyThis application has been deployed, if that's your next question 🙂#2020-06-1100:13kennyThe latest deployment did not fail.#2020-06-1100:13marshalli.e. can you connect via the bastion#2020-06-1100:13marshallwith a repl or whatever#2020-06-1100:15kennyYep, I can get a client back.#2020-06-1100:15kenny& call list-databases.#2020-06-1100:16kennyUpdated to 668 8927 and still getting the same 500.#2020-06-1100:21kennyNew deploy was successful & same error.#2020-06-1205:16David PhamDoes Datomic Free holds all the database in memory or on disks?#2020-06-1215:00micahHas anyone done a large number of excisions and hose their transactor? I initiated the excision of 1.6M entities. It looks like the transactor acknowledged about 800k of the transactions before I got fed up of waiting and restarted it. Now it just can’t seem to recover. The entities remain un-excised and it can’t seem to complete an indexing job, I think.#2020-06-1217:12JAtkinsAnyone had issues with datomic port forwarding? My team has members in the us, uganda, and india. The datomic instance is in us-east-2 (ohio), and the team members in india and uganda are frequently getting timeouts...#2020-06-1913:13tvaughanI'm in South America and I've seen my traffic throttled by some sites apparently because filtering by geolocation is such an effective way to mitigate against malicious requests.
/s#2020-06-1301:19Jon WalchI see the perms listed here for admins: https://docs.datomic.com/cloud/operation/access-control.html#org98dd40a#2020-06-1301:19Jon WalchAre there perms listed anywhere for just the client application? I tried:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3::REDACTED/*"
]
}
]
}#2020-06-1301:21Jon WalchAnd I'm getting:
{:what :uncaught-exception, :exception #error {
:cause Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
:data {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.}#2020-06-1301:22Jon WalchIf I try to pull the same creds from a pod running in my EKS cluster using the awscli, it works.#2020-06-1301:26Jon Walchhttps://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html anyone know which version of the AWS SDK cognitect is using?#2020-06-1301:28Jon WalchLooks like Update to version 1.11.479 of the AWS SDK for Java. which is below the min version to support#2020-06-1302:29Alex Miller (Clojure team)The aws api is not using an sdk at all, it talks through the rest api#2020-06-1413:56craftybonesHello. Just about beginning to play with datomic. I have a specific need and I think datomic fits the bill, I was wondering if anyone could give me pointers. I need to build a dossier system of sorts where there are notes and other details maintained for several candidates. As this detail changes through time, it would help for us to have historic information visible about each candidate#2020-06-1413:57craftybonesIn Mongo, this would be a series of records, timestamped, but all really containing mostly the same information#2020-06-1413:59craftybonesSo given that I want to maintain a history and given that the schema can be flexible, Datomic sounds right, am I correct in assuming this?#2020-06-1414:01craftybonesso let us say, I make a series of assertions on :notes , then later on, I’d like to look at :notes, not just as what the latest is, but all the :notes accrued over time#2020-06-1414:01craftybonesThis should be (trivially?) possible right?
#2020-06-1414:19marshallhttps://stackoverflow.com/questions/48898046/datomic-query-over-history#2020-06-1414:20marshallhttps://augustl.com/blog/2013/querying_datomic_for_history_of_entity/#2020-06-1414:20marshallhttps://docs.datomic.com/cloud/tutorial/history.html#2020-06-1414:41craftybonesThanks#2020-06-1414:59val_waeselynck@U8VE0UBBR as a non-official source on Datomic, I advise against using Datomic's historical features for giving users access to revisions: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html
Datomic is not a bitemporal database. A priori, I recommend making one entity per note revision, as you would do with a regular database.#2020-06-1415:04craftybonesSo you suggest an additional attribute that records the version as well, as opposed to relying just on timestamps#2020-06-1415:04craftybonesI see what you are saying here, history changes are fine as long as the shape remains the same#2020-06-1415:04craftybonesthe second the shape changes#2020-06-1415:05craftybonesthat becomes more complex#2020-06-1415:05craftybonesThanks @U06GS6P1N#2020-06-1415:05craftybonesHowever, even with what you are saying, its easier to use datomic here isn’t it, given the use case?#2020-06-1415:09val_waeselynckDatomic can be easier to use, for general reasons not related to modeling revisions, such as flexible schema, expressive reads and writes, ease of data sync, etc.#2020-06-1415:10craftybonesAlright. In this case, I have a very specific need of having to look at history#2020-06-1415:11val_waeselynckYou may be fine just storing one entity per revision or per change, if your queries aren't highly sophisticated.#2020-06-1415:12val_waeselynckOtherwise might want to look at bitemporal dbs like Crux, but there are many other aspects to consider than historical query features.#2020-06-1415:17craftybonesAs of now, I just want a history of notes per person let us say#2020-06-1415:28craftybonesso let us say some attribute was added only at tx 200, what is the cost to me as a developer if I query for that attribute in an earlier transaction?#2020-06-1415:35val_waeselynckThe main question for assessing Datomic against such use cases is how complicated your historical queries are#2020-06-1415:36craftybonesPretty much no branching, straight ahead, give me everything you’ve got on person x, at most limited by a specific duration#2020-06-1415:37craftybonesAssuming incredibly low performance needs, never more than a handful of users at any point#2020-06-1415:38craftybonesFrom what I am 
reading of the schema change, I could potentially backfill necessary data, which might not even be necessary for certain types of attributes#2020-06-1512:46favilaWell you can’t backfill such that it looks as if it was transacted in the past. That is the limitation of relying on datomic history for revisions#2020-06-1512:47favilaDatomic history is more like (immutable, not branching) git history than like time-series records#2020-06-1421:53Drew VerleeHow does using a predicate directly in the query (https://docs.datomic.com/cloud/query/query-data-reference.html#predicates) compare to querying the data then performing the predicate? I assume the predicate runs somehow before the join?#2020-06-1422:21Drew Verleethe answer is in the docs:
> The predicates =, !=, <=, <, >, and >= are special, in that they take direct advantage of Datomic's AVET index. This makes them much more efficient than equivalent formulations using ordinary predicates. For example, the "artists whose name starts with 'Q'" query shown above is much more efficient than an equivalent version using starts-with?#2020-06-1422:23Drew Verleeerrr. wait < works on strings to compare the first two letters?
;; fast -- uses AVET index
[(<= "Q" ?name)]
[(< ?name "R")]
;; slower -- must consider every value of ?name
[(clojure.string/starts-with? ?name "Q")]
That seems really odd#2020-06-1422:30Drew Verleewhat does it mean "they take advantage of datomics AVET index" does that mean the comparison is done using information in the index as well? like when we say index im thinking
"alice"
"bob"
"zack"
so using the index in the context of (< ?name "d") would mean that zack is returned and the operation do this never actual had to look at the string zack because i was stored in location that was marked like "d-z" or something.#2020-06-1512:37favilaIt means it can figure out the equivalent d/index-range call#2020-06-1512:38favila(It’s not literally d/index-range but the semantics are the same)#2020-06-1512:41favila If the query planner can see the attribute you are using, know it has an avet index, and see the comparisons and their values it can figure out a subset of the values in the index to seek instead of seeking the whole thing#2020-06-1512:58Drew Verleefor 'on-prem' i understand you have to add an avet index for an attribute by doing a transaction. i did a search through my cloud db and i don't see a db/index attribute.
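favila's point can be sketched with the peer API's d/index-range, which is the semantic equivalent of the range-predicate query (`:artist/name` is the attribute from the docs example quoted above):

```clojure
;; Sketch (peer API): the range predicates amount to a bounded seek over
;; the AVET index, roughly equivalent to:
(require '[datomic.api :as d])

(seq (d/index-range db :artist/name "Q" "R"))
;; yields only datoms with "Q" <= value < "R"; other names are never visited
```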
Do i need to add avet indexes for the queries that use an index e.g. d/index-range to work? and if so, how?#2020-06-1513:44favilacloud adds value indexes for everything already#2020-06-1513:45favilahttps://docs.datomic.com/cloud/query/raw-index-access.html#indexes#2020-06-1513:51Drew Verleeawesome, thanks!#2020-06-1521:29JAtkinsAny idea why I would not be able to resolve dependencies on com.datomic/ion? I have the maven repo added, and my default aws user credentials are tied to a datomic admin policy.#2020-06-1523:23Lone RangerI'm positive I've had this issue before and solved it but I can't remember what the issue was. Running a peer on a docker container and running into some issues:
[main] INFO search.config - Dockerization detected:true
[main] INFO search.config - Using host: 172.17.0.1
[main] INFO search.config - datomic:
[main] INFO datomic.domain - {:event :cache/create, :cache-bytes 2086666240, :pid 660, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :phase :begin, :pid 660, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :msec 0.865, :phase :end, :pid 660, :tid 1}
[main] INFO datomic.process-monitor - {:metrics/started clojure.core/identity, :pid 660, :tid 1}
[clojure-agent-send-off-pool-0] INFO datomic.process-monitor - {:AvailableMB 3880.0, :ObjectCacheCount 0, :event :metrics, :pid 660, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :phase :begin, :pid 660, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :msec 30.6, :phase :end, :pid 660, :tid 13}
[main] INFO datomic.peer - {:event :peer/connect-transactor, :host "localhost", :alt-host "172.17.0.1", :port 4334, :version "1.0.6165", :pid 660, :tid 1}
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
The AMQ thing is ... I can't figure out what the best approach to tackle that is#2020-06-1523:27Lone Rangerah ok#2020-06-1523:28Lone Rangerhttps://docs.datomic.com/on-prem/deployment.html#peers-fail-connect-txor#2020-06-1523:28Lone RangerI thought I remembered this before#2020-06-1606:05craftybonesHello#2020-06-1606:05craftybonesWhat am I missing here?
user=> (d/q '[:find ?genre
#_=> :where [_ :movie/genre ?genre]] db)
[["Drama, Action"] ["Drama"] ["Sci Fi"]]
user=> (d/q '[:find ?e ?a
#_=> :where [(fulltext $ :movie/genre "Drama") [[?e ?a _ _]]]] db)
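For `fulltext` to match anything, the attribute has to carry a fulltext index; a sketch of installing one (on-prem peer API — note :db/fulltext can only be set when the attribute is first created):

```clojure
;; Sketch: the attribute must be created with :db/fulltext true for the
;; fulltext query function to find it; it cannot be added to an
;; existing attribute later.
(d/transact conn
  [{:db/ident       :movie/genre
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/fulltext    true}])
```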
#2020-06-1606:06craftybonesBased on what the manual says, this ought to work.#2020-06-1606:06craftybonesI’ve even tried a parameterised variety and didn’t get it to work. I am sure I am doing something stupid, just don’t know what it is#2020-06-1611:31faviladoes :movie/genre have a fulltext index?#2020-06-1611:32favilaThis is the query syntax: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#2020-06-1611:32favila(`fulltext` passes it straight down to Lucene)#2020-06-1612:05craftybones😄 That was it. Thanks#2020-06-1608:22craftybonesAnybody?#2020-06-1609:11raspasov@srijayanth Try “Drama*” maybe?#2020-06-1609:11raspasovDrama*#2020-06-1610:08dmarjenburghWe have a lambda ion handler that gets invoked daily with a CloudWatch event. It never gave problems, but since this last night it throws this exception:
No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.PersistentArrayMap: datomic.ion.lambda.handler.exceptions.Incorrect
clojure.lang.ExceptionInfo: No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.PersistentArrayMap {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "No implementation of method: :->bbuf of protocol: #'datomic.ion.lambda.dispatcher/ToBbuf found for class: clojure.lang.PersistentArrayMap"}
at datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:24)
at datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:20)
at datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:171)
at datomic.ion.lambda.handler.Handler.handle_request(handler.clj:196)
at datomic.ion.lambda.handler$fn__3841$G__3766__3846.invoke(handler.clj:67)
at datomic.ion.lambda.handler$fn__3841$G__3765__3852.invoke(handler.clj:67)
at clojure.lang.Var.invoke(Var.java:399)
at datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)#2020-06-1610:11dmarjenburghNvm, I found that it's the return value from the server handler#2020-06-1610:21craftybones@raspasov - that didn’t work! 😞#2020-06-1611:48Lone Rangerdoes anyone have any experience running a dockerized peer server or peer application? There seems to be some networking requirement that I'm not fully understanding#2020-06-1611:49favilaports 4334 and 4335 must be open#2020-06-1611:50Lone Rangeroutbound or inbound?#2020-06-1611:50favilathe host= or alt-host= in the transactor properties file must name the transactor and be resolveable by peers#2020-06-1611:50favila(actually 4335 is only for dev storage)#2020-06-1611:51faviladatomic peer connections work like this:#2020-06-1611:51favilatransactor writes its own hostname to storage and sets up an artemismq cluster#2020-06-1611:51Lone Rangergotcha. So it needs to be able to hit 172.17.0.1:4334, in my case?#2020-06-1611:51favilathen peers connect to storage, lookup the transactor name, and connect to the transactor on 4334#2020-06-1611:51Lone Rangerah ok, keep going#2020-06-1611:52favilaso they need whatever the txor writes to storage to be resolveable to the transctor in whatever network they are in#2020-06-1611:52favila(4334 is for artemis)#2020-06-1611:53Lone Rangerinteresting ... digesting#2020-06-1611:54Lone Rangerand it's a one way connection?#2020-06-1611:54favilalooks like from your logs that the peer can find storage, but either 172.17.0.1 doesn’t resolve to the txor, or it’s not allowed to connect to it, or the destination port isn’t open#2020-06-1611:54Lone Rangerthere's no inbound from AMQ?#2020-06-1611:54favilathere’s inbound data, but the txor doesn’t actively connect to peers#2020-06-1611:55Lone Rangergotcha. hmm okay thank you. 
This gives me what I need to work on the puzzle#2020-06-1611:55favilabtw why is “localhost” an option?#2020-06-1611:56favilais that for connecting from outside docker?#2020-06-1612:02Lone Rangersorry back#2020-06-1612:03Lone RangerI developed it locally and now I'm attempting to dockerize it#2020-06-1612:03Lone RangerI was considering converting the app code to client process but I'd be back in the same boat with the peer server needing to be dockerized#2020-06-1612:11Lone Rangerinteresting -- found this and I'm not seeing any extra exposed ports: https://github.com/frericksm/docker-datomic-peer-server#2020-06-1612:13favilait exposes 9001#2020-06-1612:13favilathis is just the peer server#2020-06-1612:13favilano transactor#2020-06-1612:15favilayou confirmed that 4334 is exposed on the transactor?#2020-06-1612:27Lone RangerI'm trying to dockerize my peer application code, not the transactor#2020-06-1612:27Lone Rangerso using this peer server as inspiration -- sorry for the confusion#2020-06-1612:27Lone Rangertransactor is on my host machine right now#2020-06-1612:27Lone Rangerpeer application code is on the docker#2020-06-1612:28Lone RangerOH IT DOES EXPOSE 9001 !!! 
great catch#2020-06-1612:29Lone RangerAlso I notice it is going with "0.0.0.0" instead of the docker bridge network, interesting#2020-06-1612:29Lone Rangergives me some stuff to play around with, great eye.#2020-06-1612:43favila9001 is the client api port#2020-06-1612:44favilathat is the service the peer-server is exposing#2020-06-1612:44favilabut your problem is your peer can’t find or can’t talk to your transactor#2020-06-1612:44favilaswitching to client-server won’t fix that#2020-06-1613:09Lone Rangerwell worst case scenario I can rewrite the code with client API and try to talk to the peer server instead#2020-06-1613:12favilabut, the peer-server is a peer#2020-06-1613:12favilait is a peer, which implements the server half of the client api#2020-06-1613:13favilahave you tried this peer-server docker image and gotten it to connect to your transactor?#2020-06-1613:14favilawait a sec, I think I know your problem#2020-06-1613:14favilayour transactor is binding to localhost#2020-06-1613:15favilait needs to bind to something the docker network layer can route to#2020-06-1613:15favilatry host=0.0.0.0 as a first step#2020-06-1613:16favilait may not let you do that; if not, use the container’s IP#2020-06-1613:42Lone Rangergood thinking#2020-06-1613:42Lone RangerI'll give that a shot#2020-06-1613:46Lone Rangerinteresting, new error anyway#2020-06-1613:46Lone Ranger#2020-06-1613:46Lone Rangersorry about formatting 😕#2020-06-1613:47Lone Rangernow it's clearly psql that's pissed#2020-06-1614:02Lone Rangeryeah this is interesting, postgres wants a DIFFERENT host identifier than the transactor does. Transactor is connecting on 0.0.0.0, but postgres wants the docker bridge:#2020-06-1614:02Lone Ranger#2020-06-1614:14favilatransactor’s host and storage host are different concepts#2020-06-1614:15favilaI think you are misunderstanding something. 
this error shows you got even less far than before#2020-06-1614:15favilayou didn’t even manage to connect to postgres this time#2020-06-1614:17favilayou are setting up three things: 1) postgres, exposes 5432, needs routable hostname 2) transactor, exposes 4334, needs routable hostname, needs to connect to postgres. 3) peer; needs to connect to postgres, needs to connect to transactor#2020-06-1614:19favilatransactor.properties host= is what the transactor binds to for port 4334 (for peers to connect to it). Both host= and alt-host= are written to postgres for peers to discover#2020-06-1614:19favilaalt-host= is an alternative host/ip in case there’s some networking topology where what the transactor binds to isn’t the same thing other peers should connect to#2020-06-1913:20tvaughanIf you're running on a single machine, all you need to do is 1) name the running containers, 2) use this name as the host name when connecting to a running container, and 3) run all containers on the same bridge network. Ports don't need to be exposed explicitly if they're only accessed by other containers on the same bridge network. All services should bind to 0.0.0.0, not localhost. For example, we start the peer server like $DATOMIC_RELEASE/bin/run -m datomic.peer-server -h 0.0.0.0 -p 8998 -a "$DATOMIC_ACCESS_KEY_ID","$DATOMIC_SECRET_ACCESS_KEY" -d $DATOMIC_DATABASE_NAME,datomic:mem://$DATOMIC_DATABASE_NAME#2020-06-2417:44Lone RangerAwesome, thank you! I'll be sure to use this when we transition to the client API#2020-06-1612:34Lone RangerWhat is the significance of port 9001 with regards to the peer server? Would this also apply to peer application code?#2020-06-1612:45favila9001 is the port that client api clients connect to.#2020-06-1612:47Lone Rangerinteresting. So this should not apply to peer application code? 🤔#2020-06-1612:47favilano. BTW that port is configurable.
9001 is just the one that docker image you were looking at happened to use#2020-06-1612:47Lone Rangeraha#2020-06-1612:47favilalook at line 25#2020-06-1612:48favilahttps://github.com/frericksm/docker-datomic-peer-server/blob/master/Dockerfile#L25#2020-06-1612:48Lone Rangerahhh#2020-06-1612:48Lone Rangerhrrrm#2020-06-1614:18Lone Rangerokay creeping closer.
.properties file modifications:
alt-host=172.17.0.1
host=0.0.0.0
protocol=sql
#host=localhost
port=4334
peer connection string from docker:
datomic:
[main] ERROR org.apache.activemq.artemis.core.client - AMQ214016: Failed to create netty connection
java.net.UnknownHostException: 172.17.0.1
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
#2020-06-1614:22favilajust to verify--that is indeed the ip address of the transactor?#2020-06-1614:23favilaI notice it’s the same as postgres.#2020-06-1614:33Lone Rangerhmm good question. well this is what the transactor says:#2020-06-1614:33Lone Ranger$ ./run-transactor.sh
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
#2020-06-1614:34Lone Rangerdatomic:sql://<DB-NAME>?jdbc:
#2020-06-1614:35Lone Ranger(postgres and the transactor are on the same host right now)#2020-06-1614:35Lone Ranger(but in the future they could be on different hosts)#2020-06-1614:37favilaare they inside or outside your docker network?#2020-06-1614:38Lone Rangerhmm the transactor is outside, the postgres is also running in a docker probably on the default network#2020-06-1614:38favilaso, your peer can connect to postgres, but not to the transactor#2020-06-1614:39favilaI don’t understand this part though .UnknownHostException: 172.17.0.1#2020-06-1614:40favilaif the peer could connect to postgres on 172.17.0.1 to get the transactor IP, how is that an unknown host?#2020-06-1614:41favilaare you absolutely sure this is what the peer used? (d/connect "datomic:<sql://search?jdbc:postgresql://172.17.0.1:5432/datomic?user=datomic&password=datomic>")` ?#2020-06-1614:50Lone Rangergood question#2020-06-1614:50Lone RangerI'll hardcode it just to be sure#2020-06-1614:51Lone Ranger(ns search.config
(:require [clojure.tools.logging :as log]))
;; todo -- parameterize
(def dockerized? (System/getenv "DOCKERIZED"))
(log/info (str "Dockerization detected:" dockerized?))
(def db-host (if dockerized?
"172.17.0.1"
"0.0.0.0"))
(log/info (str "Using host: " db-host))
(def db-uri (str "datomic:" db-host ":5432/datomic?user=datomic&password=datomic"))
(log/info (str db-uri))
(def db-version "0.6")
is technically what it's doing#2020-06-1614:52Lone Rangerah, no wait, this is the same error facepalm#2020-06-1614:53Lone Ranger#2020-06-1614:59favilaagain, this indicates that the peer could talk to postgres but not the transactor#2020-06-1614:59favilaabsolutely baffled how the same IP could be fine and also unknown host#2020-06-1615:00favilait just used that IP to talk to postgres, and got the host and alt-host info from there#2020-06-1615:32Lone Rangeryeah, it's nuts. It's def a networking thing, when I use the --network=host option on the Docker everything works fine#2020-06-1615:33Lone Rangerah I see so the observation here indicates that the peer process looked up the location of the transactor in storage (the host/alt-host settings) and then attempted to use those to connect to the transactor?#2020-06-1615:34Lone Rangeris there a way to "ping" the transactor?#2020-06-1615:34Lone Rangerbecause then I could test what the host is supposed to be from the Docker perspective#2020-06-1615:41Lone Rangerah okay, I have a new hypothesis.
docker_gwbridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:b5:8e:77:df txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2302 bytes 377328 (377.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
that's the docker bridge (as seen from host)#2020-06-1615:41Lone Rangerpostgres is dockerized, the peer service is dockerized ... the transactor is not#2020-06-1615:42Lone Rangerin my mind it makes sense then what you said about the baffling behavior of the transactor failing when it just talked to postgres to get the information#2020-06-1615:42Lone Rangerso perhaps if I dockerize the transactor it will alleviate this issue#2020-06-1617:15favilathat’s not my understanding of “unknownhosterror”#2020-06-1617:15favilabut maybe it is just a communication thing#2020-06-1617:15Lone RangerI'm writing up the current state of affairs#2020-06-1617:18favilare: “ping”, if you just want to test reachability from various machines use nc -z hostname port#2020-06-1617:18favilait will print if it could establish a tcp connection then terminate#2020-06-1617:18Lone RangerEither there is a missing piece or I just need to learn more about networking/docker.
Current setup:
• transactor on docker0
• postgres on docker1
• peer application on docker2
observations:
transactor can connect to psql
peer application can connect to psql:
current error message from peer application:
[main] INFO search.config - Dockerization detected:true
[main] INFO search.config - Using host: 172.17.0.1
[main] INFO search.config - datomic:
[main] INFO datomic.domain - {:event :cache/create, :cache-bytes 2086666240, :pid 6575, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :phase :begin, :pid 6575, :tid 1}
[main] INFO datomic.process-monitor - {:event :metrics/initializing, :metricsCallback clojure.core/identity, :msec 0.47, :phase :end, :pid 6575, :tid 1}
[main] INFO datomic.process-monitor - {:metrics/started clojure.core/identity, :pid 6575, :tid 1}
[clojure-agent-send-off-pool-0] INFO datomic.process-monitor - {:AvailableMB 3880.0, :ObjectCacheCount 0, :event :metrics, :pid 6575, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :phase :begin, :pid 6575, :tid 13}
[clojure-agent-send-off-pool-0] INFO datomic.kv-cluster - {:event :kv-cluster/get-pod, :pod-key "pod-catalog", :msec 8.89, :phase :end, :pid 6575, :tid 13}
[main] INFO datomic.peer - {:event :peer/connect-transactor, :host "172.17.0.1", :alt-host nil, :port 4334, :version "1.0.6165", :pid 6575, :tid 1}
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
transactor config:
host=172.17.0.1
#host=0.0.0.0
protocol=sql
#host=localhost
port=4334
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
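For reference, a transactor.properties sketch with the roles untangled: `host=` is what the transactor binds to, `alt-host=` is the address advertised to peers via storage, and `sql-url=` points at storage. The IPs are the ones used in this thread, and the jdbc URL is reconstructed from the peer config shown earlier, so treat this as a hedged sketch rather than a known-good config:

```properties
protocol=sql
# bind on all interfaces so dockerized peers can reach the transactor
host=0.0.0.0
# advertise the docker bridge address to peers (written to storage)
alt-host=172.17.0.1
port=4334
# storage (postgres) location -- a different concept from host=
sql-url=jdbc:postgresql://172.17.0.1:5432/datomic
sql-user=datomic
sql-password=datomic
```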
#2020-06-1617:19Lone RangerAs I write that up I think the host might be wrong. host is supposed to be host of transactor, not sql, yeah?#2020-06-1617:22Lone Rangerthat was it 🙂 @favila you are THE MAN!! thank you!!!#2020-06-1617:22Lone RangerI should probably do a blog thing about this for the benefit of others cause this was a little tricky#2020-06-1617:22Lone RangerI should probably do anything productive with my time at all 😓#2020-06-1614:19Lone Rangerso that's a new error, the UnknownHostException#2020-06-1618:05Lone Ranger@favila if there was an equivalent of upvotes or reddit gold for this slack channel I'd be throwing them at you, thank you#2020-06-1622:38jacksonIs qseq not available in datomic.api as suggested here?
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/qseq#2020-06-1622:42jacksonSpecifically the peer api for 0.9.6045. And I have the same issue with index-pull.#2020-06-1623:14favilaThis fn is brand new and only available on the latest version (yours is not) https://docs.datomic.com/on-prem/changes.html#2020-06-1700:36jacksonNot sure how I missed that release, thanks!#2020-06-1622:44zhuxun2Earlier I asked a question about EQL in #fulcro but I figured it might concerns datomic users as well so I felt compelled to drop it here as well. Apologize for the long post. 🙏
--- original post ---
I have a question about the design of EQL, though I'm not sure this is the right place to discuss it. However, I feel EQL is something fulcro users deal with heavily, so I think it might be valuable for me to hear your opinions:
There are a couple of quirks that made me wonder why EQL was designed the way it is. Let's take a look at a typical EQL query -- the kind that probably shows up in your app hundreds of times
[:user/id
:user/name
:user/email
{:user/projects [:project/id
:project/name
:project/start-date
:project/end-date
:project/completed?]}
{:user/contacts [:user/id]}]
EQL claims to have "the same" shape as the returned data. That's awesome! However, why don't we go a step further? Consider the return value of the above query:
{:user/id #uuid "..."
:user/name "Fred Mertz"
:user/email "
Why wasn't EQL designed to completely mimic that structure:
{:user/id _
:user/name _
:user/email _
:user/projects [{:project/id _
:project/name _
:project/start-date _
:project/end-date _
:project/completed? _}]
:user/contacts [{:user/id _}]}
Or, if the explicitness of plurality is not desired here:
{:user/id _
:user/name _
:user/email _
:user/projects {:project/id _
:project/name _
:project/start-date _
:project/end-date _
:project/completed? _}
:user/contacts {:user/id _}}
The immediate benefit is that now I can use the map namespace syntax to make it much more succinct and DRY (and easy on the eye):
#:user{:id _
:name _
:email _
:projects #:project{:id _
:name _
:start-date _
:end-date _
:completed? _}
:contacts #:user{:id _}}#2020-06-1622:44zhuxun2IMHO many important semantics are much better aligned this way. For example, in a return value, the order of the keys in a level of map should not matter, and there should not be duplicated keys. However, EQL uses an ordered collection (vector) to denote the keys, the semantics of which imply an order yet ensure no uniqueness. Also, it feels like in EQL maps are used in place of pairs. I understand that Clojure doesn't have a built-in
literal for pairs so it makes sense to use maps, but maps seem to be a poor fit for this role -- here they are only allowed to have one key-value pair, and they push the key into the next level when it should really belong to the outer level. I feel that the ad-hoc-ish design not only misses mathematical simplicity but also makes everything unnecessarily complex. If I were to write a function to retrieve all root-level keys given an EQL query (which should have been trivial), the implementation would be a few lines unnecessarily longer since I need to consider those ref-type attributes. If I am using Emacs to manually write a test example given an EQL query, I am doing lots of unnecessary work changing brackets into braces and splicing sexp's.
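The root-level-keys function described above can indeed be short, but it does need the extra branch for join maps; a pure-Clojure sketch (`root-keys` is a hypothetical name):

```clojure
;; Root-level keys of an EQL vector: plain keywords pass through,
;; join maps (ref-type attributes) contribute their single key.
(defn root-keys [eql]
  (mapv (fn [entry] (if (map? entry) (ffirst entry) entry)) eql))

(root-keys [:user/id :user/name {:user/projects [:project/id]}])
;; => [:user/id :user/name :user/projects]
```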
That being said, my exposure to Clojure and Datomic/Pathom/Fulcro is limited, and I truly want to hear if there are reasons why EQL was designed the way it is rather than my intuitive version. I apologize that my above arguments spiralled into a small rant.#2020-06-1623:20favilaNot a complete answer but some historical context: EQL is based on pull expressions, which existed prior to the namespaced-map feature you mention#2020-06-1623:20favilaSo to some degree this is a historical accident.#2020-06-1623:21favilaAlso, the symmetry of your proposal breaks down once you consider parameters#2020-06-1623:22favilaKey renaming, limits, defaults, etc. Some of that you can smuggle into the value slot, but you need a plan for nested maps#2020-06-1700:56souenzzoI don't think the pull notation is important, and I don't think one is better than the other. We can have many notations/representations that talk about the same AST.#2020-06-1705:36zhuxun2@favila I can support params the same way the current EQL does -- on the key slot, no?
#:user{(:id {:with "params"}) _
:name _
:email _
(:projects {:with "params"}) #:project{:id _
:name _
:start-date _
:end-date _
:completed? _}
:contacts #:user{:id _}}
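For what it's worth, the map-shaped notation sketched above can be converted mechanically to the standard EQL vector form. A hedged pure-Clojure sketch (`map->eql` is a hypothetical helper; parameterized keys like `(:id {:with "params"})` are passed through untouched rather than handled specially):

```clojure
;; Convert a map-shaped query (join = nested map, plain attribute = `_`)
;; into an EQL-style vector.
(defn map->eql [m]
  (mapv (fn [[k v]]
          (if (map? v)
            {k (map->eql v)}
            k))
        m))

(map->eql '{:user/id _
            :user/name _
            :user/contacts {:user/id _}})
;; => [:user/id :user/name {:user/contacts [:user/id]}]
```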
#2020-06-1705:38zhuxun2Sure, I lifted the keys this way, but once I start doing params, the query has already deviated so much from regular static data that it becomes a DSL, so I don't care about the structural simplicity as much#2020-06-1709:36souenzzohttps://gist.github.com/souenzzo/c1fcc19c0ed1ac08f3902fb6ed80eb7a#2020-06-1709:36souenzzoalso, "eql query notation" isn't the same as "datomic selector notation"
https://github.com/souenzzo/eql-datomic/#2020-06-1709:50souenzzoin other words: it's OK to have many "query languages". Each one has its own benefits/facilities; all of them can talk about the same AST#2020-06-1715:31Ramon RiosHello everyone#2020-06-1715:34Ramon Rios
I'm following the datomic tutorial and getting this error. Is this because I'm using the free instead of the pro version?#2020-06-1717:12marshallyes, peer-server is not included in Datomic Free#2020-06-1807:43Ramon RiosShoot, how should I start with the free version instead?#2020-06-1811:30dazlddo you need persistence, or just want to play around?#2020-06-1811:31dazldusing an in-memory db is the easiest way to start playing with it, i think#2020-06-1812:31souenzzo(d/connect "datomic:") 🙂#2020-06-1812:33Ramon RiosI need to play around. Will use it in a project and I want to have hands-on experience#2020-06-1812:33Ramon RiosThank you all : )#2020-06-1813:19joshkhfollowing up on a question i asked the other day, i am trying to pass the result of a query through a serialization library, and i am having trouble making sense of an error. upon def'ing the result i can see that it is a clojure.lang.PersistentVector
(def result (d/q ... db))
=> #'my-ns/result
(type result)
=> clojure.lang.PersistentVector
and a postwalk through the data structure shows only core java/clojure classes
(clojure.lang.PersistentVector
clojure.lang.PersistentHashMap
clojure.lang.MapEntry
clojure.lang.Keyword
clojure.lang.PersistentArrayMap
java.lang.String
java.lang.Boolean)
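The class-collecting postwalk mentioned above might look like this (a sketch; `all-classes` is a hypothetical name):

```clojure
(require '[clojure.walk :as walk])

;; Collect the set of classes appearing anywhere in a nested structure,
;; the way the class listing above was produced.
(defn all-classes [form]
  (let [acc (atom #{})]
    (walk/postwalk (fn [x] (swap! acc conj (class x)) x) form)
    @acc))

(all-classes [{:a "s"} true])
```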
however, when i pass result to the serialization library, i get a NotSerializableException for datomic.client.impl.shared.Db.
(sp/set bc "testkey" 120 result)
Execution error (NotSerializableException)
at java.io.ObjectOutputStream/writeObject0 (ObjectOutputStream.java:1185).
datomic.client.impl.shared.Db
how is the datomic.client.impl.shared.Db class related to the result of the query?#2020-06-1813:23favilaMaybe metadata? Maybe your walking isn’t looking at every object?#2020-06-1813:24favilawhat is your query and result? have you tried bisecting the result?#2020-06-1813:26joshkheverything works as expected if i copy and paste the contents of result back in to the repl, so there's definitely something going on with the object itself#2020-06-1813:26favilathat sounds like metadata. does your serializer serialize metadata?#2020-06-1813:26joshkhit does indeed. and the query result vector does have a nav protocol:
(meta result)
=>
#:clojure.core.protocols{nav #object[clojure.core$partial$fn__5839 0x613ebeb5 "
#2020-06-1813:27favilaTIL#2020-06-1813:28joshkhalso, i wasn't able to remove the metadata from result 🤔#2020-06-1813:28favilahow so?#2020-06-1813:29joshkh(meta (vary-meta result dissoc clojure.core.protocols/nav))
=>
#:clojure.core.protocols{nav #object[clojure.core$partial$fn__5839 0x613ebeb5 "
#2020-06-1813:29favilait’s a keyword not a symbol#2020-06-1813:29favilawhy not (with-meta result nil)?#2020-06-1813:30joshkhyes, why not is a good question. thanks for the tip. 😉#2020-06-1813:31favilathere could still be metadata on nested objects. I didn’t know the client lib made results navigable and I don’t know how it works#2020-06-1813:32favilathis seems like something better solved in your serializer if possible#2020-06-1813:32favilacan it be customized or operate in meta/non-meta preserving modes?#2020-06-1813:34joshkhfavila, once again, thanks for your help. i shrugged off the metadata earlier when i saw only the nav protocol. but sure enough stripping it away solved the problem#2020-06-1813:34joshkh(top level metadata, that is)#2020-06-1813:34joshkhi actually need support for my own metadata so this works for me#2020-06-1814:23favila“only the nav protocol” these are always live objects (functions) so I don’t expect them to be serializable ever#2020-06-1815:00joshkhagreed, and i did find that while stripping the top level metadata worked in my one example, other query results had nested metadata that could not be serialised (as you suspected). for now it is a hobby project, so a simple postwalk to remove all metadata works with the least amount of effort, but i will explore the serialization library for a more solid solution.#2020-06-1814:30ivanahello, can anyone explain me what I have to do to make this query work?
{:find '[[?e ...]]
:in '[$ ?date-from ?date-to]
:args [db date-from date-to]
:where '[[?e :logistic/driver ?d]
[?e :logistic/delivery-date ?date]
(not [?d :driver/external? true])
[(get-else $ ?e :logistic/completed #inst "2000") ?completed-date]
(or
(and [?e :logistic/state :logistic.state/incomplete]
[(<= ?completed-date ?date-to)])
[?e :logistic/state :logistic.state/active]
(and (or [?e :logistic/state :logistic.state/completed]
[?e :logistic/state :logistic.state/failed])
[(<= ?date-from ?completed-date)]
[(<= ?completed-date ?date-to)]))]}#2020-06-1814:31ivanathe error is on the or clause: Assert failed: All clauses in 'or' must use same set of vars#2020-06-1815:05ivanaMoved all the and/or clauses to an external function, and it works. I have no idea about their magic inside datomic#2020-06-1815:11favilathis has to do with whether a var should be unified with the outside of the rule or not#2020-06-1815:11favilayou can control this by using or-join and and-join instead and being explicit#2020-06-1815:13favilainvisibly, or is creating a rule, and each rule must unify to the same set of vars outside the rule#2020-06-1815:13favilaif you don’t specify the vars, it looks inside the rule to determine it#2020-06-1815:13favilayou’ll notice each clause of your or uses a different, non-overlapping set of vars#2020-06-1815:19ivanaThanks. But it seems too complicated for me; it looks much simpler to use an external function with predictable behavior...#2020-06-1816:34Drew VerleeIs there a way to get datomic change log updates via some notification?#2020-06-1816:35marshall@drewverlee you can subscribe to the Announcements topic on the datomic forum#2020-06-1816:35marshallhttp://forum.datomic.com#2020-06-1816:38Drew Verleegreat thanks!#2020-06-1913:05arohnerWhat is the idiomatic way to express
(d/q '[:find [?name ...] :in $ :where ...])
? I’m getting Only find-rel elements are allowed in client find-spec, see #2020-06-1913:12marshall@arohner you need to use the find-rel spec only: :find ?name :in ...#2020-06-1913:12marshallmanipulating the collection after it is returned can then be handled in your client app code#2020-06-1913:13arohnerRight. I’m asking if there’s a datomic client alternative to [?name …]#2020-06-1913:13arohnerOk, sounds like there’s no alternative#2020-06-1913:14souenzzo(map first (d/q '[]))
Or
'[:find ?name :keys :name ....] @arohner#2020-06-1913:21arohnerThanks#2020-06-2014:05erikwould it be stupid to think of Datomic not as a DB but as a durable message queue system, with the added benefit of providing an event-sourced DB implemented using covering indexes?#2020-06-2016:10Linus EricssonI think thats a fair description. However, you will want to use separate datomic databases for events vs business data at some point, but yes. Please make them consider it. Saves a lot of hassle.#2020-06-2014:56erikI'm asking because I'm trying to convince my teammates to use Datomic instead of Kafka/NATS#2020-06-2015:40Joe Lanehttps://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html @eallik this may be useful#2020-06-2015:40Joe LaneAlso, what is NATS?#2020-06-2018:40Drew Verleegiven i added a pure function to my :allow list in the datomic/ion-config.edn and the ion push and deploy were successful i would expect to be able to use that function as an :xform, however i get an error when i try that says its not allowed in the ion-config. How can i double check the allowed functions?#2020-06-2019:35alidlorenzoWhat’s the value-add of entity predicates (and tx functions more generally), compared to defining constraints in the function that’s initiating the transaction?
ex of entity predicate: (d/transact conn {:tx-data [(merge {:db/ensure :user/validate} user-data)]})
ex of regular function: (if user-valid (d/transact conn {:tx-data [user-data]}))
I was hoping entity predicates would ensure certain data never entered database unless it was valid. But if they must be explicitly called, what else do they add compared to regular function validations?#2020-06-2020:46favilasafety from read and write skew#2020-06-2020:47favilatransaction functions read the before-transaction value of the db and can abort if they see a constraint violation#2020-06-2020:48favilaentity and attribute predicates see the after-transaction value of the db (after all datom expansion, right before final commit) and can abort.#2020-06-2020:49favilachecking before issuing the transaction is seeing a value of the db which may not be the most recent value by the time the transaction command reaches the transaction writer#2020-06-2020:49favilaso these are three different moments in a transaction lifecycle#2020-06-2020:49favilathe absolute safest thing is entity and attribute predicates#2020-06-2020:49alidlorenzooh ok, it does make a difference then. this lifecycle wasn’t as clear in docs, so thanks for explaining#2020-06-2020:50alidlorenzo*cloud docs at least#2020-06-2201:16alidlorenzo@U09R86PA4 if entity predicates see the after-transaction value, then how can you use them to validate new entities?
i.e. if I want to validate that a new user’s username/email do not exist, I need to do that before the transaction, otherwise an existing user’s data could be upserted#2020-06-2201:18alidlorenzoi guess transaction functions can be used for new entity data, and user predicates to check transactions on existing entities; though it does feel odd that these two use-cases would be segmented like that, am I missing something?#2020-06-2201:33favilaDon’t use an upserting attribute if you care about the difference between a create and update operation#2020-06-2201:35favilaPredicates check that the db is in a valid state and can abort if not. Their value is that they don’t know or care about what operations got the db into that state. They can abstractly say “these conditions must always hold”#2020-06-2201:35favilaThat’s also why they can’t alter the result, only accept or abort#2020-06-2201:38alidlorenzowhat do you mean by an upserting attribute?
for example, i can run this transaction twice, first time it creates user, second time it upserts it
(d/transact conn
{:tx-data [{:user/username "admin"
:user/email "
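Whether a repeated transact like the snippet above creates a new entity, upserts, or aborts is controlled by the attribute's :db/unique setting. A minimal schema sketch of the two variants (attribute name taken from the snippet above; exact schema values are from the Datomic schema docs):

```clojure
;; :db.unique/identity -> re-transacting the same username upserts,
;; i.e. the new datoms merge into the existing entity
{:db/ident       :user/username
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique      :db.unique/identity}

;; :db.unique/value -> asserting an already-used username on a
;; different entity aborts the transaction instead of upserting
{:db/ident       :user/username
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique      :db.unique/value}
```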
#2020-06-2201:39favilaIt will only upsert if one of those attrs is marked as upserting#2020-06-2201:39favila:db.unique/identity#2020-06-2201:40alidlorenzoah ok, that must be the bit i’m missing. I’ll look more into that in docs; thanks!#2020-06-2201:40favilaOtherwise it will create a new entity, or abort if marked :db.unique/value#2020-06-2201:46alidlorenzo^^ yea, I’ve been doing :db.unique/identity instead of :db.unique/value - the former causes upserts 😅#2020-06-2201:59favilaReally it’s safer to always name the entity you are intending to manipulate. Use db/id with a lookup ref for update, tempid for create#2020-06-2020:24erik@lanejo01 yes, aware of that. but event sourcing is not quite the same as what Kafka does... Kafka is event streaming#2020-06-2021:25alidlorenzofor those that reinstall schema every time on startup, how do you handle cardinality many attributes?
specifically, changing attribute specs
e.g. if I change my entity’s :db.attr/preds from 'db.attr-preds/foo1 to db.attr-preds/foo2
my initial assumption was that reinstalling the schema would replace its predicate attribute, but because predicates are cardinality many the new one is added on.
is this a case when reinstalling schema every time stops working, or are there workarounds?#2020-06-2206:32steveb8nQ: I’d like to hear people’s tips/tricks for tuning queries from prod metrics. I know I’m going to need this so I’ll start with what I’m planning and what I wish existed…#2020-06-2206:33steveb8nto start I’m gonna keep a metric for every distinct query run, time to execute, number of entities returned etc. this should give me a good signal for poorly tuned queries#2020-06-2206:34steveb8nSince bad queries are often due to poorly ordered where clauses, I wonder if there is a way to include total number of entities scanned? comparing this to number returned would be a really strong signal#2020-06-2206:35steveb8nany other tricks?#2020-06-2208:16steveb8nI’ve also been pondering an auto-tune capability. if you took every query and ran it with one :where clause at a time and sorted by the result counts, that should give the best where ordering for prod data distribution. only problem is these queries would consume massive CPU so would need a separate query group.#2020-06-2211:19Joe Lane@U0510KXTU
1. Make a wrapper namespace for the query namespace and add the instrumentation (timing, cardinality, etc) there. I've seen projects which ship up to honeycomb using a homegrown lib, but the concept is generic. I HIGHLY RECOMMEND creating some notion of trace-id chaining, either via a hierarchy or some other means (e.g. request creates trace-id 123, which is the parent of trace-id 234 created by a query and then a sibling trace-id 345 is made to do some side-effect, all with their own instrumentation numbers that can also be rolled up). It's extremely valuable to see the whole lifecycle of a request, including all the queries and external calls it performs (datomic or otherwise)
2. I think I remember you being on cloud, so another thing to think about is client-cloud vs ions. They each have different tradeoffs but with ions you get the locality advantage.
3. I don't know of any way to include the number of entities scanned other than re-running the query a few times building up the clauses and magically knowing how to replace the :find clause args with a call to count. That being said, if your queries are fairly static (vs dynamically computed from a client request) you could probably build a tool to accomplish this. (d/db-stats db) is your friend here. Also, there is this tool which may be sufficient, or at least a great starting point for your "auto-tuner".
4. Try to avoid using massive pull-patterns in your :find clauses. Pulls do joins like :where clauses do, but can have subtle and confusing performance semantics, especially when the data cardinalities change out from under you (like in high traffic production environments).
5. Look at some of the new query tools in the latest release such as qseq and :xform in pull.
Those are the first 5 off the top of my head, LMK if you want to go deeper on any of them.#2020-06-2222:44steveb8nThanks @U0CJ19XAM this is good stuff. I already have a middleware layer in front of all db calls. it’s a successor to https://github.com/stevebuik/ns-clone#2020-06-2222:45steveb8nI also use x-ray so I have the some of the tools you mention in place. reading that article has given me some ideas though.#2020-06-2222:46steveb8nultimately, it’s exceptional queries I want to see, not all of them. so my “signal” vs noise is what I’m currently focused on.#2020-06-2222:46steveb8nI didn’t know about the large pull behaviour. I am doing this so I’ll dig deeper there. Thanks.#2020-06-2213:20Joe LaneRelevant https://www.honeycomb.io/blog/so-you-want-to-build-an-observability-tool/#2020-06-2216:57Lone Rangerdoes anyone know if mariadb is a supported persistence solution?#2020-06-2217:02ghadi@goomba https://docs.datomic.com/on-prem/storage.html#sql-database#2020-06-2217:02ghadiyes#2020-06-2217:02ghadi> If you want to use a different SQL server, you'll need to mimic the table and schema from one of the included databases. #2020-06-2217:02ghadiThe mysql one should work for maria, I think#2020-06-2217:04Lone Rangermuch appreciated 🙇#2020-06-2217:06Lone Rangerand I suppose that the sql-url and sql-driver-class attributes in the config will inform the transactor of the correct jar to load, and that I should do the same for the peer?#2020-06-2217:12ghadinot sure, but I think if you have the correct url & the jar on the classpath, it will auto discover the correct class#2020-06-2305:49jwkoelewijnHi, we are running an on-prem Datomic installation, with two memcached servers. My question is, are these 2 memcached servers redundant? 
In other words, could I take one offline, upgrade the underlying machine and bring it back online without any hiccups?#2020-06-2311:50favilaDonno about hiccups, but the memcached servers are not replicas: each has half the segments#2020-06-2311:52favilaBut this is really two different questions#2020-06-2311:52favilaWhat you really want to know is how will peers behave when a memcached becomes unreachable#2020-06-2311:55favilaI know everything will still work but I’m unsure if there will be extra blocking timeouts added to peer work or not.#2020-06-2312:56marshallthere will not be blocking timeouts#2020-06-2312:56marshallthe memcache response timeout is very short#2020-06-2312:57marshallif it doesn’t return within that very short window the peer will go to storage instead#2020-06-2407:15jwkoelewijnThanks a lot for the explanations! helped in my understanding!#2020-06-2319:01erikwhat is the client API equivalent of subscriptions/HTTP SSE?#2020-06-2415:17Lone Rangerok this is a really odd question but ... apparently the storage service they want me to use has a READ port and a WRITE port. I've never seen this before. I'm curious if anyone has any thoughts on how this might be accomplished
So it won't cause a conflict that I pass in a different string than the transactor suggests on startup?#2020-06-2415:33Lone Rangerjust making sure it doesn't do some kind of validation#2020-06-2415:33favilano#2020-06-2415:33Lone Rangerlovely#2020-06-2415:33favilayou should understand why there are different ports though#2020-06-2415:34favilais this purely an access-control thing, or do they have different consistency guarantees between them? is the other port a read replica?#2020-06-2415:54Lone Rangerconsistency guarantees#2020-06-2415:54Lone Rangerreplica, yes. Something about performance#2020-06-2415:54Lone Rangerpolite thing to do would be to respect the setup, they said, so wanted to see if I could accomodate#2020-06-2416:27favilawhat I mean is, could the peer and transactor ever read different things using different ports?#2020-06-2416:28favilatransactor does a commit, and then informs peers. The peers need to be able to read what the transactor wrote#2020-06-2417:03Lone RangerI see. 
Needs to be strongly consistent?#2020-06-2417:05Lone RangerI'm not sure -- checking#2020-06-2417:29Lone Ranger"about 1 second" propagation time#2020-06-2417:59Lone RangerOkay based on what I'm seeing here: https://docs.datomic.com/on-prem/architecture.html#2020-06-2418:00Lone RangerIt looks like updates are sent directly from the transactor to the peer#2020-06-2418:00Lone Rangerso as long as it's sending the actual data and not, say, a lookup value then we should be fine#2020-06-2418:32favilaI am not a datomic dev, but it seems like at least some things must be lookups sometimes#2020-06-2418:32favilae.g., when a new index is finished#2020-06-2418:52Lone Rangergotcha#2020-06-2415:18Lone RangerI'm thinking if I had a peer service that only did upserts and another that only did reads, I could pass a write uri-string to the upsert service and a read uri-string to the query service, while having the transactor sit on the "write node".#2020-06-2415:18Lone RangerDoes that pass a sniff test...?#2020-06-2415:28rolandHello, I saw that there is a :reverse option in the index-pull function. Is there also an option to run through the index in reverse order using the datoms function?#2020-06-2517:50tatutI see datomic cloud accepts frequencies as an aggregate in :find but I don’t see it documented, is that supported or some undefined behaviour? EDIT: doesn’t seem to return what I’d expect… weird that it’s accepted#2020-06-2518:11favilaI think you are accidentally using it as a custom aggregate: https://docs.datomic.com/on-prem/query.html#custom-aggregates#2020-06-2518:12favilaI think no-namespace symbols are interpreted as in clojure.core, so it happens to work#2020-06-2518:13tatutI haven’t seen anything in cloud docs about custom aggregates#2020-06-2518:19favilahttps://docs.datomic.com/cloud/query/query-data-reference.html#deploying#2020-06-2518:19favilabecause it’s cloud (query is running on a different process than yours) there’s usually stuff you need to do to expose your fn to the query code#2020-06-2518:20favilabut again, I think because it’s clojure core it just accidentally works#2020-06-2518:20favilaI can also use datomic.api functions in a client query when using peer-server#2020-06-2518:21tatutbut it didn’t work… it’s not returning what frequencies should#2020-06-2518:21favilawhat is it returning?#2020-06-2518:24tatutit didn’t get all the values I was expecting it to get… I’ll try it out again later if it should work#2020-06-2518:25tatutPerhaps I’m not understanding the cloud docs as they don’t mention the custom aggregates#2020-06-2518:27favilaThis is what I get:#2020-06-2518:28favila(dc/q '[:find (frequencies ?b)
:with ?a
:in $ [[?a ?b]]]
db [[:a :A] [:a :Z] [:b :A] [:b :F]])
=> [[{:F 1, :A 2, :Z 1}]]#2020-06-2518:28favilaseems right?#2020-06-2518:31tatutthat looks right#2020-06-2518:32tatutmy frequencies is only getting one item so the result is always a mapping of {the-one-value 1}#2020-06-2518:34tatutbut thanks for the help, I’ll continue investigating later#2020-06-2518:45favilaif you don’t use :with, that is expected#2020-06-2518:45favilathe result is a set, thus every item occurs once#2020-06-2519:09jeff tanghi! is it possible to retract a reverse-loookup attribute-value? e.g. [:db/retract 100 :_children 200]#2020-06-2519:38favila[:db/retract 200 :children 100]#2020-06-2519:38favilait’s not possible with an underscore attribute#2020-06-2519:38favilayou have to reverse the terms#2020-06-2519:54jeff tangthank you @U09R86PA4#2020-06-2519:38JAtkinsIs it possible to respond to http ion requests with multiple "Set-Cookie" headers?
My response using just ring wrap cookies looks like this:
"Headers": {
"Content-Type": "application/transit+json; charset=utf-8",
"Set-Cookie": [
"jwt-token=eyJraWQiOiJOM0pRej--retracted--sdfw;Path=/;HttpOnly;SameSite=Strict"
]
}
This works in ring since for seqs the header is translated to this:
Content-Type: application/transit+json; charset=utf-8
Set-Cookie: jwt-token=eyJraWQiOiJOM0pRej--retracted--sdfw;Path=/;HttpOnly;SameSite=Strict
However the ion spec only allows maps of string->string to be returned, and there is no way to set multiple cookies with only one line in the header.#2020-06-2520:20souenzzohttps://github.com/pedestal/pedestal.ions/issues/3#2020-06-2520:28JAtkinsGenius - thanks!#2020-06-2807:32adamtaitThis fix works great on Ions with Solo deploy or via Lambda, but it’s failing for me with http-direct.
{:status 200
 :headers {"content-type" "application/json"
           "Set-cookie" "b=cookie"
           "set-cookie" "a=cookie"}
 :body "{\"data\": \"stuff\"}"}
This response from the http-direct handler results in the HTTP response:
< HTTP/2 200
< content-type: application/json
< content-length: 264
< date: Sun, 28 Jun 2020 07:30:54 GMT
< x-amzn-requestid: c3bc8a56-c0bf-41d7-b6b3-24292a2b6509
< x-amzn-remapped-content-length: 264
< set-cookie: b=cookie
< x-amz-apigw-id: O1APLEtPIAMFkGw=
< x-amzn-remapped-server: Jetty(9.4.24.v20191120)
< x-amzn-remapped-date: Sun, 28 Jun 2020 07:30:53 GMT
< x-cache: Miss from cloudfront
… only a single ‘set-cookie’ header when received by the client.#2020-06-2807:39adamtaitI have also tried different variations of multiValueHeaders (which is supported by API Gateway) but the Ions HTTP direct wrapper seems to ignore those.
Would love to hear if anyone else has seen this issue or worked around it (or if it really is a bug)!#2020-06-2811:31souenzzoI do not use or recommended this CaSE sensitive solution. I just join the cookies with ;#2020-06-2901:43adamtait@U2J4FRT2T are you suggesting this?
:headers { "set-cookie": "a=cookie; b=cookie" }
I wasn’t able to find any documentation on combining multiple cookies in the same header but I tried it anyways and found that browsers ignore the 2nd cookie (`b=cookie` in this example).#2020-06-2901:43JAtkinsThat’s part of the browser spec. I tried that at first. A new line is required for every cookie. Maybe a \n is needed?#2020-06-2920:45adamtaitThanks for the idea! I wasn’t able to get \n to work.
I posted the header inconsistency (between :lambdas and :http-direct) to the Datomic forum. Hopefully someone from the Datomic team will comment.
https://forum.datomic.com/t/inconsistency-between-lambdas-http-direct-multiple-headers/1506#2020-06-2520:41kschltz@pedro.silva#2020-06-2520:46Pedro SilvaHello,
I am executing the split stack process to be able to upgrade our Datomic version as described in:
https://docs.datomic.com/cloud/operation/split-stacks.html#delete-master
After starting the delete process in CloudFormation we get an error, as you can see in the image.
Can someone help us solve this problem so we can continue the process?
Thank you.#2020-06-2600:28souenzzoOne year ago I deployed a datomic-ions stack
After failing 3 times in a row I decided not to try updates anymore.
I’m really sad to see that they still fail at updates#2020-06-2607:23David Pham
Hello everyone :) in datomic, in the schema, how can you write that a combination of two keys is unique, like id and timestamp? With a tuple?#2020-06-2612:21marshallYep, a tuple: https://blog.datomic.com/2019/06/tuples-and-database-predicates.html#2020-06-2607:24David PhamDoes anyone have some suggestion how to implement an entity containing several timeseries? Or how to model time series?#2020-06-2612:24marshallThe modeling decision here is somewhat up to you, but one option is that each entry in the time series is an individual entity with an ordinal (or time) attr and your "parent" entity has a cardinality many reference to the set of them.#2020-06-2612:24marshallIf the set of time series entries is always small (<= eight) you could use a tuple#2020-06-2612:26marshallIf you never want to introspect the individual entries but will only consume the whole timeseries all together you could also store that data elsewhere in a LOB (like s3) and just store a reference to it in datomic#2020-06-2620:11David PhamThanks a lot!#2020-06-2607:25David PhamI am sorry if it sounds trivial, but I am starting with data script.#2020-06-2719:14niquolaHello, datomic users. Is it common to create datomic schema on fly, when loading unknown data? Any recommendations?#2020-06-2807:35adamtait#2020-06-2818:43zhuxun2Is it true that in Datomic there's not a concept of "not null" as there is in SQL and we just have to assume that every attribute can be missing?#2020-06-2818:48zhuxun2Hmmm.. I guess required attributes is the counterpart I am looking for
https://docs.datomic.com/on-prem/schema.html#required-attributes#2020-06-2904:38raspasov@zhuxun2 there’s also missing? https://docs.datomic.com/on-prem/query.html#missing#2020-06-2916:17joshkhare these two constraints equivalent when finding entities that are missing an attribute?
{:where [[(missing? $ ?n :item/sale?)]]}
{:where [(not [?n :item/sale?])]}
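The two forms above can be compared on a throwaway in-memory database. A sketch, assuming the on-prem peer library datomic.api is on the classpath; :item/name is a hypothetical attribute added here so the results are readable:

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://sale-demo")
(d/create-database uri)
(def conn (d/connect uri))

@(d/transact conn [{:db/ident       :item/name
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}
                   {:db/ident       :item/sale?
                    :db/valueType   :db.type/boolean
                    :db/cardinality :db.cardinality/one}])
@(d/transact conn [{:item/name "on-sale" :item/sale? true}
                   {:item/name "no-sale-attr"}])

(def db (d/db conn))
;; both should return only the entity that has no :item/sale? datom
(d/q '[:find ?name :where [?n :item/name ?name]
                          [(missing? $ ?n :item/sale?)]] db)
(d/q '[:find ?name :where [?n :item/name ?name]
                          (not [?n :item/sale?])] db)
```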
#2020-06-2916:20favilaYeah, pretty much. missing? is a function call; not is a clause that should be visible to the query planner. I don’t know if the query plan is different in any important way.#2020-06-2916:20favilahistorical note, not came later#2020-06-2916:22favilamissing? probably doesn’t work on datasources which are not databases, but I don’t know that for sure#2020-06-2916:27joshkhcool, and as always thanks#2020-06-3015:36JAtkinsDo datomic ions have api documentation?#2020-06-3016:07kennyNo 😞 The best you'll find is https://docs.datomic.com/cloud/ions/ions-reference.html#2020-06-3016:16JAtkinsI found my answer there, but that is a pain...#2020-06-3016:41Joe LaneWhat kind of "api documentation" did you have in mind?#2020-06-3016:43JAtkinsSomething like a doc string on every function so I don't have to hunt around when reading my code...#2020-06-3016:44kennyAlso the equivalent of this https://docs.datomic.com/client-api/datomic.client.api.html#2020-06-3016:44Joe LaneCan you give me an example? Do you mean on your ions or like what kenny just posted above?#2020-06-3016:47JAtkinsWhat kenny posted. For e.g. the (get-env) function is totally blank for docstrings. It would be nice to at least have a link to the reference, better yet a permalink to the configuration section, even better than that a synopsis + a link#2020-06-3016:48Joe LaneThanks for clarifying.#2020-06-3016:50JAtkinsNP. I've just found myself very often in the last week trying to decode my ion setup. It's mostly fine when I'm in the middle of everything, since the docs are up and I remember where to look.
But on reviewing the code it becomes much slower.#2020-06-3015:56Richardhi - trying to get existing Datomic setup moved from Docker Swarm to Kubernetes#2020-06-3015:57Richardwondering if there are any articles or blog posts on setting up networking so that the peers can connect to the transactor (all in same Kubernetes cluster)#2020-06-3015:59RichardI found this Kubernetes YAML which suggests you can set the port numbers for transactor: https://clojurians-log.clojureverse.org/datomic/2017-03-19/1489953464.521402#2020-07-0118:09genekimHello! I’m wondering if I can get some help with my Datomic Cloud instance that seems to have gone south — in fact, I’m on a call with @plexus trying to puzzle this out.
1. I’m getting “channel 2: open failed: connect failed: Connection refused” errors on the proxy, when a Datomic Client tries to access the Datomic Cloud instance.
2. In AWS CloudWatch, I see the following alarm, which occurred very close to when we started seeing Datomic connection errors occurring.
Can anyone propose any recommendations? @plexus, any other data worth sharing? (Sorry, gotta pop off for 30m. Thank you, all!)#2020-07-0118:11genekimError I get from a REPL connection:
Execution error (ExceptionInfo) at datomic.client.impl.cloud/get-s3-auth-path (cloud.clj:178).
Unable to connect to localhost:8182
#2020-07-0118:20plexusreading some more AWS docs it seems we have exceeded the allocated write throughput, which is supposed to only cause throttling, but instead the datomic instance has gone under or become unreachable...#2020-07-0118:23ghadicheck your cloudwatch datomic dashboard#2020-07-0118:23ghadishould have a clear smoking gun#2020-07-0118:24ghadiif you have any Alerts (not just "Events") in that dashboard, look at those too by navigating to cloudwatch logs#2020-07-0118:24genekimThank you @U050ECB92 — is this the dashboard? (Sorry, on a call…. 🙂#2020-07-0118:25ghadiyes, weird that it's mostly empty#2020-07-0118:25ghadiwhat about the bottom half of that dash?#2020-07-0118:28genekimWas empty — full screenshot here:#2020-07-0118:29genekim(Empty dashboard was the reason I was asking Datomic team at Conj 2019 about getting help upgrading last year, which I never got around to.)#2020-07-0118:31marshallthe alarm you posted is irrelevant - that is used by DDB for autoscaling capacity#2020-07-0118:31marshallyou should restart your solo compute instance#2020-07-0118:32marshallyou can just bounce it from the EC2 console#2020-07-0118:32marshall@U6VPZS1EK#2020-07-0118:32marshallhttps://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-solo#2020-07-0118:33marshall#2020-07-0118:33marshall^ solo dashboard should look like that#2020-07-0118:34marshallyour instance and/or JVM got wedged and b/c solo is not an HA system there is nothing to fail-over to#2020-07-0118:34marshallquickest fix is to terminate the instance and let ASG create a new one#2020-07-0118:34genekimRoger that! Will try in 30m as soon as I get off this call! Thx!#2020-07-0118:34marshall👍#2020-07-0119:04genekimPosting this datomic log event, before I destroy the solo instance:
2020-06-25T22:54:28.953-07:00
{
"Msg": "RestartingDaemonException",
"Ex": {
"Via": [
{
"Type": "clojure.lang.ExceptionInfo",
"Message": "Unable to load index root ref bd9b3c36-2912-437d-8fc7-6953ab60a1b2",
"Data": {
"Ret": {},
"DbId": "bd9b3c36-2912-437d-8fc7-6953ab60a1b2"
},
"At": [
"datomic.index$require_ref_map",
"invokeStatic",
"index.clj",
843
]
}
],
"Trace": [
[
"datomic.index$require_ref_map",
"invokeStatic",
"index.clj",
843
],
[
"datomic.index$require_ref_map",
"invoke",
"index.clj",
836
],
[
"datomic.index$require_root_id",
"invokeStatic",
"index.clj",
849
],
[
"datomic.index$require_root_id",
"invoke",
"index.clj",
846
],
[
"datomic.adopter$start_adopter_thread$fn__21647",
"invoke",
"adopter.clj",
67
],
[
"datomic.async$restarting_daemon$fn__10442$fn__10443",
"invoke",
"async.clj",
162
],
[
"datomic.async$restarting_daemon$fn__10442",
"invoke",
"async.clj",
161
],
[
"clojure.core$binding_conveyor_fn$fn__5739",
"invoke",
"core.clj",
2030
],
[
"datomic.async$daemon$fn__10439",
"invoke",
"async.clj",
146
],
[
"clojure.lang.AFn",
"run",
"AFn.java",
22
],
[
"java.lang.Thread",
"run",
"Thread.java",
748
]
],
"Cause": "Unable to load index root ref bd9b3c36-2912-437d-8fc7-6953ab60a1b2",
"Data": {
"Ret": {},
"DbId": "bd9b3c36-2912-437d-8fc7-6953ab60a1b2"
}
},
"Type": "Alert",
"Tid": 6306,
"Timestamp": 1593150867958
}#2020-07-0119:07genekim#2020-07-0119:23marshallThanks, although that shouldn’t cause a significant issue#2020-07-0119:29genekimOkay, terminated the datomic instance, which didn’t work… terminated the bastion-host instance, which didn’t work…
terminated the datomic proxy script, and restarted… forced some sort of reauthentication, which did work!
Thank you, all!#2020-07-0119:31genekim🙏🙏🙏
🎉🎉🎉#2020-07-0119:51marshallyou’d definitely need to restart the proxy script after restarting the bastion instance#2020-07-0119:52marshallIIRC it regenerates creds/keys after coming back from termination#2020-07-0119:30genekimThank you for the help, all! Described resolution of story at end of thread ^^^.#2020-07-0120:00zhuxun2Is it possible to implement a correct task queue in Datomic? Mostly importantly, ensure that multiple task retrievers won't get the same task from the top of the queue. (In PostgreSQL for example I needed to use LOCK FOR UPDATE)#2020-07-0120:09Joe LaneIt's certainly possible to make a queue out of datomic, but why not just use an actual queue?
I also don't necessarily think it's a good idea to use datomic as a queue, depending on the throughput, failure semantics, and data retention you need.#2020-07-0120:35zhuxun2@U0CJ19XAM Good point. I am looking into https://github.com/Factual/durable-queue as well.#2020-07-0120:35Joe LaneWhy not sqs?#2020-07-0120:46zhuxun2Actually, I just realized a queue might not satisfy what I need. There isn't a static queue. Tasks have priorities and they might be changed dynamically. Every task retriever grabs the top-priority job from the database at the moment it accesses the database. Is there a established solution or pattern for something like that?#2020-07-0120:49Joe LaneDepends on the domain, if this is something for humans (like a Jira / Trello clone) then this is easy. If this is for machines, it depends on your throughput, scale, and failure modes.#2020-07-0120:50Joe LaneThat being said, you may be interested in https://github.com/clojure/data.priority-map#2020-07-0120:51Joe Laneand / or https://github.com/clojure/data.avl/#2020-07-0121:01zhuxun2The job retrievers are machines. I don't think an in-memory solution would work well for my particular case, plus, the tasks and their attributes (from which to compute the priority) are already stored in a datomic database so I that's why I was wondering if there's some sort of locking mechanism between querying and updating...#2020-07-0121:02zhuxun2The performance of the priority sorting isn't that much of a problem, at the moment an index on the priority attribute should work well enough#2020-07-0121:05zhuxun2In other words, is there a way to say "change the first item satisfying my query to have attribute [:task/taken true]" -- all within an atomic transaction#2020-07-0121:08Joe LaneYes, via a transaction function, but I don't think it's going to work out well in the end. What happens once a task is taken but then the task retriever dies? What are your retry policies? 
How do you distinguish between a slow job and a failed job?#2020-07-0121:11Joe LaneDo you have different levels of prioritization like low, medium and high, or is everything prioritized globally? If you can do the former, I think SQS with a queue per level is likely a better approach.#2020-07-0121:11Joe LaneBecause it handles all these things for you#2020-07-0121:14zhuxun2Thanks. That makes sense. What if I'm not using a standard cloud service? Can Kafka serve a similar purpose?#2020-07-0121:17Joe LaneI'd look at rabbitMQ, kafka is a durable log.#2020-07-0121:18Joe Lane(It could do this as well, but may be more difficult to operate. Again, I know nothing of your problem domain, scale, other constraints, etc. so it's hard to make a good recommendation)#2020-07-0121:22zhuxun2Thanks! I will take a look at rabbitMQ#2020-07-0122:46Lone RangerI don't suppose there is any way to force a peer to use an alternative address than the one provided by the transactor (retrieved from storage), is there?#2020-07-0201:20favilaThe transactor properties file can have host and alt-host. Are two names not enough?#2020-07-0201:22favilaI’m not sure about ports. I dimly recall that you can specify port in the connection string, but that might only be for dev storage#2020-07-0122:47Lone Rangeror alternative port, at least?#2020-07-0122:48Lone Rangeri.e., transactor running on transactorUrl:4334 but is being reverse proxied at with appropriate firewall rules VPN etc etc#2020-07-0123:08Lone Rangerokay looks like we're able to change the port on the LB. still curious if this is possible, tho#2020-07-0212:50kschltzHi, there.
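Going back to the task-queue thread: the transaction-function route Joe Lane mentions can also be approximated with the built-in :db/cas (compare-and-swap). A sketch, assuming a hypothetical :task/taken boolean attribute and the on-prem peer API; if two retrievers race, only one transact succeeds and the loser gets an exception:

```clojure
;; Try to claim task `eid`: succeeds only if :task/taken is still false.
;; (Use nil as the expected old value for tasks that have never had
;; :task/taken asserted at all.)
(defn claim-task! [conn eid]
  (try
    @(d/transact conn [[:db/cas eid :task/taken false true]])
    true
    (catch Exception _ false)))
```

The caller would query for the current top-priority task, attempt the claim, and on false re-query and retry; the retry/timeout concerns raised above still apply.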
I have a ~3 billion datom database in datomic cloud and the need to add an AVET index seems more than reasonable#2020-07-0212:56faviladatomic cloud already value-indexes everything#2020-07-0212:56kschltzJust realized it#2020-07-0212:56kschltzThanks#2020-07-0212:50kschltzI was wondering how datomic will handle the creation of this new index#2020-07-0212:52kschltzWill it index the existing datoms? If so, would it be harmful from an operational standpoint?#2020-07-0212:59marciolHi, We are using Datomic Cloud, executing queries against a Database with approximately 3 Billion Datoms, but a trivial query is taking a long time to return, or it isn’t returning at all, raising a timeout exception.
With the query below we are trying to return all transactions from a merchant in a range of time, super trivial:
(d/q {:query '[:find (pull ?entity [*])
:in $
:where
[?entity :merchant-id "beb9c7db-a7eb-4e56-8c4e-4db195566562"]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc #inst"2020-06-01T03:00:00.000-00:00")]
[(<= ?transaction-time-utc #inst"2020-06-30T03:00:00.000-00:00")]]
:args [(d/db (:conn client))]
:timeout 50000})
We are running query groups on i3.xlarge (with 30.5 GB RAM), and wondering whether we need to increase these machines.
Can someone with more experience throw light on this?#2020-07-0213:14Ian Fernandezd/query with pull inside tends to be redundant sometimes, I think that d/pull will get better performance =)#2020-07-0213:20marshallusing pull in query should have the same performance characteristics as a query followed by a pull, except that in the case of using client/cloud using pull in query will save a round trip/wire cost#2020-07-0213:21Ian Fernandezit's datomic cloud w/o ions#2020-07-0213:22Ian FernandezI think d/pull will help in this case#2020-07-0213:22marshallyou should test the time it takes to query just for the entity IDs and how long it takes to pull the attributes of interest#2020-07-0213:22marshallto determine if the pull or the query is taking the majority of the time#2020-07-0213:42Ian Fernandezcan it be a problem to use this w/o Ions with too many entities on cloud?#2020-07-0213:46Guilherme PupolinHi @U05120CBV, these are the queries and execution times
First: PULL + 30 days interval = 110578.733438 msecs (14,054 results)
Second: Entity + 30 days interval = 22990.008083 msecs (14,054 results)
(time (def pull-entities (d/q {:query '[:find (pull ?entity [*])
:in $ ?merchant-id ?transaction-time-start ?transaction-time-end
:where
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"
#inst"2020-06-01T03:00:00.000-00:00"
#inst"2020-06-30T03:00:00.000-00:00"]
:timeout 50000})))
"Elapsed time: 110578.733438 msecs"
=> #'pgo.commons.datomic-test/pull-entities
(count pull-entities)
=> 14054
(time (def pull-entities (d/q {:query '[:find ?entity
:in $ ?merchant-id ?transaction-time-start ?transaction-time-end
:where
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"
#inst"2020-06-01T03:00:00.000-00:00"
#inst"2020-06-30T03:00:00.000-00:00"]
:timeout 50000})))
"Elapsed time: 22990.008083 msecs"
=> #'pgo.commons.datomic-test/pull-entities
(count pull-entities)
=> 14054 #2020-07-0213:54Guilherme PupolinAnd one more case, without filter time:
Third: Entity + without interval = 3768.019134 msecs (17,670 results)
(time (def pull-entities (d/q {:query '[:find ?entity
:in $ ?merchant-id
:where
[?entity :merchant-id ?merchant-id]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"]
:timeout 50000})))
"Elapsed time: 3768.019134 msecs"
=> #'pgo.commons.datomic-test/pull-entities
(count pull-entities)
=> 17670#2020-07-0214:36Joe LaneThis is the time to get the data back to your development computers, right?#2020-07-0214:36Guilherme PupolinRight! @U0CJ19XAM #2020-07-0214:38Joe LaneWhere are you located, which AWS_REGION are your non-ion machines located, and which AWS_REGION is your datomic cloud cluster deployed in?#2020-07-0214:44Guilherme PupolinFor these examples, I connected using datomic-cli from São Paulo in the cluster at us-east-1.
In the production environment, there is a VPC Endpoint connecting our applications to Datomic in us-east-1.#2020-07-0214:51Joe LaneSo I understand, in prod, your datomic cluster is in us-east-1, and your applications connect to it from which AWS_REGION? Where are the machines themselves?#2020-07-0214:52Guilherme PupolinIn prod, both in us-east-1#2020-07-0214:57Joe LaneHave you gone through the "Decomposing the query" example marshall posted?#2020-07-0215:35Guilherme PupolinYes, I have. But I couldn’t improve any more than that (I got a better result just passing the merchant-id; I do not know a better way to search on this date ref).
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]
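(For reference, marshall's decompose-then-pull suggestion can be sketched like this — a hedged example reusing the attribute names and merchant id from the snippets above; `db` is an assumed client db value:)

```clojure
;; Sketch of "decomposing the query": first a cheap query that returns
;; only entity ids, then a separate pull per id. Timing the two halves
;; separately shows whether the joins/predicates or the pull dominate.
(let [eids (d/q {:query '[:find ?entity
                          :in $ ?merchant-id
                          :where [?entity :merchant-id ?merchant-id]]
                 :args [db "beb9c7db-a7eb-4e56-8c4e-4db195566562"]})]
  (mapv #(d/pull db '[*] (first %)) eids))
```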
#2020-07-0216:09marciol@U05120CBV we noticed that what is hurting the query performance are all clauses related to time.#2020-07-0216:14Joe Lane@marciol Can you typehint the query clauses with ^java.util.Date#2020-07-0216:14marciolHmm, good idea @U0CJ19XAM
cc: @U016FDZFA2X#2020-07-0216:22Guilherme Pupolin@U0CJ19XAM in this way?
(d/q {:query '[:find ?entity
:in $ ?merchant-id ^java.util.Date ?transaction-time-start ^java.util.Date ?transaction-time-end
:where
[?entity :merchant-id ?merchant-id]
[?entity :transaction-time ?transaction-time]
[?transaction-time :utc-time ?transaction-time-utc]
[(>= ?transaction-time-utc ?transaction-time-start)]
[(<= ?transaction-time-utc ?transaction-time-end)]]
:args [(d/db (:conn client))
"beb9c7db-a7eb-4e56-8c4e-4db195566562"
#inst"2020-06-01T03:00:00.000-00:00"
#inst"2020-06-30T03:00:00.000-00:00"]
:timeout 50000})))#2020-07-0216:23Joe Lanehttps://docs.datomic.com/cloud/query/query-data-reference.html#calling-java-methods#2020-07-0216:24Joe LaneAlthough, it may not make a difference because you are using the custom comparators <= and >=.#2020-07-0216:27Joe LaneWhy did y'all decrease the timeout from 60 seconds to 50 seconds?#2020-07-0216:49marshall@U016FDZFA2X what are the schema definitions for all the attributes in the query#2020-07-0217:15kschltz#2020-07-0217:15kschltzThis is the one#2020-07-0217:15kschltzFrom what we know so far, the issue lies in the time nesting#2020-07-0217:15kschltz@U05120CBV#2020-07-0213:00kschltzIt takes around 50s to retrieve 14k results#2020-07-0213:02kschltz• We sliced the db using d/since without much improvement#2020-07-0213:04marciolIt’s a lot of time to return the result of such trivial query, must be something we can do to decrease this time.#2020-07-0213:05favilacould it also be the pull * and not the query itself?#2020-07-0213:07favilayour find looks odd (missing close paren). is that the whole thing?#2020-07-0213:19marciol@U016FDZFA2X#2020-07-0213:21marshall@marciol Have you separately tested the time it takes to query for the entity IDs and to pull the attributes from them#2020-07-0213:23marciol@U05120CBV we are going to do all this to get a more fine grained overview of what is happening.#2020-07-0213:23marshallalso review the decomposing a query example#2020-07-0213:04marshall@marciol @schultzkaue https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/decomposing_a_query.clj
have you worked through the decomposing-a-query example?#2020-07-0216:34zhuxun2What is the idiomatic way to answer the question "when was this attribute last changed"?#2020-07-0216:37ghadibind the ?tx (the fourth component) in a clause, then join to the ?tx's :db/txInstant#2020-07-0216:37ghadi[?e :myAttr _ ?tx]
[?tx :db/txInstant ?time]
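(Putting ghadi's two clauses into a complete query — a sketch; :myAttr is a placeholder attribute and `db`/`eid` are assumed bindings:)

```clojure
;; When was :myAttr last changed on this entity? Bind the datom's tx
;; and join to its :db/txInstant; max picks the latest if several match.
;; Run against (d/history db) to also see retractions and older values.
(d/q '[:find (max ?time) .
       :in $ ?e
       :where
       [?e :myAttr _ ?tx]
       [?tx :db/txInstant ?time]]
     db eid)
```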
#2020-07-0216:53zhuxun2Is there an equivalent of "sort by" in datomic?#2020-07-0216:54zhuxun2Seems not: https://stackoverflow.com/a/30205147#2020-07-0217:03zhuxun2Then how do I query for something like the top 10 entities with respect to an attribute? Do I have to query for all of them and then do the sorting client-side?#2020-07-0217:07favilaTake a look at d/seek-datoms and d/index-range on the peer api, and index-seek and index-pull on the peer#2020-07-0217:08favilayou could also try abusing nested queries a bit. You can call normal clojure code inside a query, so you could have an inner query that gets all results, and an outer query that sorts and limits them#2020-07-0217:31zhuxun2I guess by nested queries you mean something like this, right?
https://docs.datomic.com/cloud/query/query-data-reference.html#q#2020-07-0217:32favilayes#2020-07-0217:04zhuxun2There must be a better way ...#2020-07-0221:40zhuxun2Can I have an attribute storing an unspecified EDN?#2020-07-0221:40zhuxun2Or an unspecified nested data structure consisting of maps, vectors, and any of the supported basic types as leaves (i.e., a JSON-like structure)#2020-07-0223:21souenzzo@zhuxun2 you can use pr-str and store as string#2020-07-0223:46marshall@zhuxun2 Datomic is not intended for storing LOBs. You should avoid putting large objects directly in Datomic. Either split them into individual facts (datoms) or store the LOBs somewhere else (ddb, s3, etc) and store a reference to them in Datomic#2020-07-0309:54jeroenvandijk@U05120CBV Do you have a rule of thumb for when a string becomes too big and should be considered a LOB?#2020-07-0318:27ilshad@U0FT7SRLP 4Kb is the limit for strings#2020-07-0308:54Adrian SmithIs there a learning website with a collection of SQL queries and their Datomic equivalents?#2020-07-0315:59dvingonot sure about a comparison (of sql and datalog), but this tutorial is quite good
http://www.learndatalogtoday.org/#2020-07-0316:51ertugrulcetinhey guys, I'm considering using Datomic Cloud; it seems that datomic.client.api does not have all the functions that on-prem datomic.api does. Like datomic.api/entity, would it be a problem? Is there any alternative if I need to use this function?#2020-07-0319:09kschltzHi there @U0UL1KDLN we're using datomic cloud, and most of the time we use the pull api when we already have the entity id#2020-07-0319:19ertugrulcetin@UNAPH1QMN thank you for the info#2020-07-0609:54Linus EricssonDatomic Cloud and datomic on-prem work quite differently. The Entity API expects the application to be able to cache data locally (like a Datomic Peer does), otherwise it has to do a lot of roundtrips to get an entity API working, which would defeat the purpose of an entity view since it would be very slow and ALWAYS has the n+1 problem - that you have to do more roundtrips to the server to get more data. The entity API is meant to be a very quick way to navigate around the database.
You can get a similar (but not complete!) way of navigating parts of the database using pull expressions, as Kaue describes above.#2020-07-0610:43ertugrulcetin@UQY3M3F6D thank you so much!#2020-07-0319:05kschltzI was wondering, if I were to shard my datomic cloud or to split my data in any way, would it make any sense to just create different dbs?
something like
(d/create-database client {:db-name "smurfs-1"})
(d/create-database client {:db-name "smurfs-2"})
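(If the split is by some natural key, the routing could be sketched like this — a hypothetical helper, not from the thread; `client` and `customer-id` are assumed:)

```clojure
;; Hypothetical routing sketch: pick a db name by hashing a shard key,
;; so each entity lives in exactly one of the N databases.
(defn shard-db-name [n k]
  (str "smurfs-" (inc (mod (hash k) n))))

;; (d/connect client {:db-name (shard-db-name 2 customer-id)})
```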
#2020-07-0319:05kschltzthen query one or the other#2020-07-0319:06kschltzI'd rather have to query 500M datoms over one db or the other#2020-07-0319:06kschltzthan to query 1B in a single db#2020-07-0319:07kschltzdoes that make any sense from an architectural standpoint?#2020-07-0322:46Jon WalchI'm using Datomic Cloud
My data model is similar to
{:user/foo "foo"
:user/other-one "hi"
:user/bar [{:bar/bazed? true} {:bar/bazed? false}]}
In one query, I want to pull :user/foo :user/other-one and everything in :user/bar where :bar/bazed? is false.
The issue that I'm running into is that I want :user/foo and :user/other-one no matter what, but if there are no :bar/bazed? that are false, the whole query returns an empty vector because of implicit joins.
I'm currently doing what I need with two queries, but this path is extremely hot so I'd like to reduce it to one. I also don't want to pull all of :user/bar because it could be extremely large, whereas the number of items in :user/bar with :bar/bazed? equal to false will be quite small.#2020-07-0415:03favilaI think you are asking for a list of maps of users where each user only contains the bar entities where bar/bazed? is false?#2020-07-0415:03favila(does it specifically have to be false, or is unasserted the same?)#2020-07-0415:04favilaYou can do it with a nested query and joining yourself#2020-07-0415:07favilaor issue two queries in parallel and join yourself#2020-07-0415:09favilaI recommend adding the false bars to the user map under a different name so the keyword has a globally-unique meaning.#2020-07-0519:52Jon Walch@U09R86PA4 I'm looking for a specific user. I have a unique attribute to look them up with. I want the user no matter what, but I also want everything in user/bar where bar/bazed? is false. If no bar/bazed? is false, I want user/bar to be returned as an empty vector#2020-07-0520:25Jon WalchI think I may go the async query route instead of trying to do it all in one blocking query#2020-07-0609:59Linus EricssonI think you should consider changing the boolean to an additional reference between the user and the data map object instead. This way the structure of the database helps you retrieve the correct data. It makes the change of the boolean data a bit more complicated, but it sounds like it would be worth it in this case.
So for instance:
user -> :mail/inbox #{all mail in the inbox}
user -> :mail/unread-in-inbox #{the unread mails from the inbox}
obviously one has to update both the inbox and unread-in-inbox when removing an unread email but it can still be a simpler solution for you.
you can also just have two different attributes
:mail/inbox and :mail/unread and query for where both links exist. The :mail/unread could then be sort of isomorphic with :bar/bazed? in your example above.#2020-07-0617:55Jon Walch@UQY3M3F6D Thanks for weighing in! That's a good suggestion!#2020-07-0411:55ashnurHi, continuing from here: https://clojurians.slack.com/archives/C053AK3F9/p1593859141412800
I cloned the ion-starter repo and I am reading https://docs.datomic.com/cloud/ions/ions-tutorial.html#orge88f23e
I have to make sure I have installed the ion-dev tools. So I changed the config files according to that documentation. Does that constitute making sure? Is there a programmatic test I could run to check that changing the files actually installed things?#2020-07-0413:11Alex Miller (Clojure team)I know you’ve tried some stuff - can you roll back as close as you can to the tutorial and then share your error message?#2020-07-0413:42souenzzo@jonwalch
[:find (pull ?e [:user/foo :user/other-one {:user/bar [*]}]) :where [?e :user/bar ?bar] [?bar :bar/bazed?]] ?#2020-07-0519:57Jon WalchThe issue here is that I also need to filter on :user/foo to make sure I'm getting the correct user.#2020-07-0520:07Jon WalchYeah just tried a version of this. I get no results for the entire query if nothing in :user/bar has :bar/bazed? set to false.#2020-07-0413:57ashnurAfaik, I am on par with the tutorial, I did the edits described there. But actually I started from aws marketplace first because I know less about aws than about datomic. (much much much less).#2020-07-0413:57ashnurHowever there isn't an error message unless I try to do something more, but I am not sure what I should try. I know that some things work and one thing doesn't.#2020-07-0413:59ashnurLet me create some gists so you can see both.#2020-07-0414:08ashnurOne thing that makes it difficult to roll back is that I never knew I had a m2/settings file and similar things : )#2020-07-0414:09ashnurRight now I have to admit that nothing works, not even what worked in the morning, I get Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.43 in central () to every command I try#2020-07-0414:56souenzzo@ashnur check your ~/.m2/settings.xml#2020-07-0417:14ashnurObviously, it would be nicer if I knew what to check on it, like does it have exactly 137 bytes in it or what 🙂
https://gist.github.com/ashnur/62a62afa1c538c249110cfc0202b524a#2020-07-0415:02ashnurI did#2020-07-0415:21pvillegas12I am trying to do a recursive query
[:person/firstName :person/lastName {:person/friends ...}]
However, I would like to impose a condition on the recursion. For example, if the friend has the name “bob”. It looks like the recursion is on the read side of the pull, but I was wondering if there is some way to do recursion and have a condition on the friends (recursion attribute)?#2020-07-0418:15Joe Lane@ashnur Are you still running into issues?#2020-07-0418:17ashnurThey haven't been resolved yet#2020-07-0418:23Joe LaneIn my ~/.clojure/deps.edn I have these entries for my maven repos and an ion-dev alias
:aliases {:ion-dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.265"}}
:main-opts ["-m" "datomic.ion.dev"]}}
:mvn/repos {"datomic-cloud" {:url ""}
"central" {:url ""}
"clojars" {:url ""}}#2020-07-0418:23Joe LaneOf particular importance is the "datomic-cloud" entry under :mvn/repos#2020-07-0418:23ashnurI do have that too#2020-07-0418:24Joe LaneWhat is your OS?#2020-07-0418:24ashnurlinux#2020-07-0418:25ashnurI even tried this, where the repo config is in the alias https://github.com/Datomic/ion-starter/blob/master/examples/.clojure/deps.edn#2020-07-0418:25Joe LaneCan you show me the project's deps.edn?#2020-07-0418:28ashnurhttps://gist.github.com/ashnur/fc2e517bbe6ee3fe1d9ed2cf8c14e1e8#2020-07-0418:30Joe LaneAlso, how are you "running" your project locally?
What is the exact operation you're using to start a repl?
Why do you have ?region-eu-west-1 at the end of the datomic-releases entry?#2020-07-0418:30ashnurI am not sure what "running locally" would mean in this context#2020-07-0418:31ashnurI don't usually start any repls#2020-07-0418:31ashnurand the docs says https://clojure.org/reference/deps_and_cli#_procurers#2020-07-0418:32ashnurwhat you call 'project' here is literally a core.clj with a single hello world function, I can run it many ways#2020-07-0418:33Joe LaneI don't think you're interpreting the procurers section right.#2020-07-0418:33ashnurlast time it worked I ran clj -m nrepl.cmdline --middleware "[cider.nrepl/cider-middleware]" --interactive but obviously that also doesn't do anything right now just says the same error as above#2020-07-0418:33ashnurI haven't interpreted it at all, it was linked in a forum post#2020-07-0418:34Joe LaneWhich forum post are you referring to?#2020-07-0418:35ashnurhttps://forum.datomic.com/t/issue-retrieving-com-datomic-ion-dependency-from-datomic-cloud-maven-repo/508#2020-07-0418:35ashnurIf I am doing something stupid, just tell me 🙂#2020-07-0418:36Joe LaneWhat is the output of aws s3 cp .#2020-07-0418:40Joe LaneAlso, nothing in the forum post you sent makes me think you need to append ?region-eu-west-1#2020-07-0418:41Joe LaneAfter you paste the output of the aws s3 cp ... command, can you replace the :mvn/repos entry in your deps.edn with all three entries in the message I pasted above? https://clojurians.slack.com/archives/C03RZMDSH/p1593886989420200#2020-07-0418:42Joe LaneThen run simply clj in the project root.#2020-07-0418:42Joe LaneNo nrepl or anything.#2020-07-0418:49Joe Lane@ me when you do the above#2020-07-0419:38ashnurfatal error: An error occurred (403) when calling the HeadObject operation: Forbidden#2020-07-0420:05Joe Lane@ashnur I didn't see this until just now.
What is the output of
aws sts get-caller-identity
#2020-07-0420:08ashnur{
"UserId": "AIDAT24PJRSJ6WKCBUGPZ",
"Account": "263904136339",
"Arn": "arn:aws:iam::263904136339:user/same-page-dev"
}
#2020-07-0420:06ashnurno worries, I think I am having some s3 access errors#2020-07-0420:08Alex Miller (Clojure team)do you have aws env vars set?#2020-07-0420:09ashnuryes, but wait a second, I just have a terrible suspicion, let me check something#2020-07-0420:39ashnurOk, I had to check that there are no typos, I wish I'd found one. I made some edits for consistency, but the user is added to the group, the policy is attached to the group and AWS shows that the user is active and it's used. I am not sure why the s3 thing says forbidden, I will try to debug that because even if it's unrelated, it should be working anyway, but maybe it helps.#2020-07-0422:41Alex Miller (Clojure team)clj is trying to download the jar from the cloud s3 maven bucket. what region is your user in? I assume you're not running this from inside aws or anything like that.#2020-07-0506:41ashnurI am running this on my laptop, which I assume is not inside aws, but I am unfamiliar with the terminology, tell me if I misunderstood, please.#2020-07-0421:27daniel.spanielis there a way to query for an entity and its children (recursively) but also ( while recursing ) exclude certain children? I have query like this
(ffirst
(d/q '[:find (pull ?e pattern)
:in $ pattern ?tree-id ?company-id
:where
[?e :accounting-tree/id ?tree-id]
[?e :accounting-account/children ?c]
(or-join [?c ?company-id]
[(missing? $ ?c :entity/company)]
[?c :entity/company ?company-id])
]
db '[* {:accounting-account/children ...}]
tree-id company-id))
#2020-07-0421:27daniel.spanielbut it does not exclude the children without that company-id . I have tried many variations but no luck#2020-07-0422:37bamarcoI am trying to allow one of my ions to access dynamodb. I am following https://docs.datomic.com/cloud/operation/access-control.html#authorize-ions
When I get to the step:
> Adding an IAM Policy to Datomic Nodes
>
> The Datomic Compute CF template lets you specify a custom policy via the template parameter named `NodePolicyArn`. In the console UI this parameter appears under:
Optional Configuration | Existing IAM managed policy for instances
> You can set or update your custom node policy at any time by performing a https://docs.datomic.com/cloud/operation/howto.html#update-parameter, setting the `NodePolicyArn` to the ARN of your policy.
>
Neither https://console.aws.amazon.com/console/home?region=us-east-1 nor https://console.aws.amazon.com/iam/home?region=us-east-1#/home seems to have an "Optional Configuration" option#2020-07-0500:34Joe Lane@mail524 What release of Datomic Cloud are you using?
I'm on the latest and if I wanted to add a node policy to my instances I would:
1. Find and select the compute stack
2. Click the update button on the top right
3. Use the current template
4. Scroll to the bottom of the "Specify Stack Details" page
5. Add my Policy Arn#2020-07-0500:40Joe Lane2.#2020-07-0500:40Joe Lane3.#2020-07-0500:40Joe Lane4. (Top)#2020-07-0500:41Joe Lane4. (Bottom)#2020-07-0501:55bamarcoThanks @lanejo01 I got it working well enough to move on to my next error. I'm running a solo topology com.datomic/client-cloud #:mvn{:version "0.8.81"}. Now I just have to figure out the function signature for the websockets $connect function.#2020-07-0507:18ashnurI tried running the aws s3 cp . --debug 2> log and this is the result 2020-07-05 08:14:56,113 - MainThread - urllib3.connectionpool - DEBUG - "HEAD /maven/releases/com/datomic/ion/0.9.7/ion-0.9.7.jar HTTP/1.1" 403 0
At this point I am not sure where should I look next for any fix, please if you have even just guesses, don't hold back, it would help me learn.#2020-07-0507:19ashnurfull log https://gist.githubusercontent.com/ashnur/60564b6ff515f7b317aaedb359ff24f3/raw/a397ba366dc7175f48a8a64418e0ab3776f9c4ba/aws.forbidden.s3.cp.log#2020-07-0612:33marshallYour AWS credentials need to allow access to the public datomic maven repo.
If you are not running as an AWS administrator (not just the datomic admin policy), you'll need to add an s3 read permission for the datomic maven bucket to your user#2020-07-0613:11ashnuroh, that sounds like it!#2020-07-0613:21ashnurit's just that I am severely confused atm about what 's3 read permission for a specific bucket' means.
Should I copy what is in the textbox? https://docs.datomic.com/cloud/operation/access-control.html or should I use https://awspolicygen.s3.amazonaws.com/policygen.html to generate something?#2020-07-0613:22marshallthis is a separate issue/policy from the datomic admin policy#2020-07-0613:22marshallone second, let me find an example#2020-07-0613:23ashnurok, thanks for clearing up that confusion 🙂#2020-07-0613:30marshall{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::datomic-releases-1fc2183a/*",
"arn:aws:s3:::datomic-releases-1fc2183a"
]
}#2020-07-0613:30marshallsomething like that#2020-07-0613:30marshallthe issue is that by default AWS users/roles/etc have no permissions#2020-07-0613:31marshallso if you don’t explicitly allow them to read from a bucket, even if that bucket is publicly accessible, the client permissions for the AWS role will prevent access#2020-07-0613:40ashnurmakes sense, I was suspicious of something like this, but being completely new to most of the terms, I got lost easily and since I used search, it led me to the wrong places#2020-07-0613:40marshallwe are actively working on improving the docs/forum search#2020-07-0613:40marshallfor finding answers to this (and other) questions#2020-07-0613:41ashnurwell, if you know someone who works on the datomic website/docs, I would happily help for free#2020-07-0614:59marshallI just realized the role rule i posted wasn’t quite right#2020-07-0614:59marshallgive me a few to correct it#2020-07-0615:00marshallfixed#2020-07-1010:04ashnurFinally I got time to get back to this, but it says that Policy has invalid resource
this is the json I am trying to save:
{
"Id": "Policy1594355345891",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": ["s3:GetObject", "s3:GetBucketLocation"],
"Resource": "arn:aws:s3:::datomic-releases-1fc2183a",
"Principal": {
"AWS": ["arn:aws:iam::263904136339:user/same-page-dev"]
}
}
]
}#2020-07-0510:25ashnurIf I could at least know if the error is with my local or my remote aws config, but the more docs I read the more confused I get. Nothing seems to have any effect for the better.#2020-07-0513:00Alex Miller (Clojure team)the log is actually very helpful as that removes everything but the s3 call. your iam user is in eu-west-1 but is correctly trying to get to the bucket in us-east-1. from the head request failing this is almost certainly something to do with your iam permissions for this user, like not being permitted to do s3 downloads#2020-07-0513:03Alex Miller (Clojure team)at the top of the tutorial, there is a list of prereqs, the last of which are
> Run in an environment with https://docs.datomic.com/cloud/getting-started/connecting.html.
> Have https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator permissions.#2020-07-0513:04Alex Miller (Clojure team)I'm thinking maybe your iam user does not have aws administrator permissions?#2020-07-0513:04Alex Miller (Clojure team)the steps are at https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user#2020-07-0513:20ashnurI will double check it now#2020-07-0513:36ashnurAfaik I can tell, everything is set as it is written. I have checked this yesterday when I said that I had a suspicion. I wrote it then that "the user is added to the group the policy is attached to the group", hoping if that's not enough someone will point it out. Should I make screenshots? What would be a troubleshoot option here?#2020-07-0513:44Alex Miller (Clojure team)you used the Datomic administrator policy?#2020-07-0513:53ashnurI think yes, but these are specifically the kind of questions that if I misunderstand it even a bit, that can lead to much confusion.
When I subscribed, the template created a policy called arn:aws:iam::263904136339:policy/datomic-admin-datomic-same-page-eu-west-1 which I then attached to a new group and my user is added to this group, so if I go to https://console.aws.amazon.com/iam/home?#/users/same-page-dev?section=permissions where same-page-dev is the username, I can see the name of the policy listed. (datomic-admin-datomic-same-page-eu-west-1)#2020-07-0513:54ashnurI also wish I could specify a default profile for datomic, but I haven't found this without specifying a default for aws, but that makes the named profile thing a bit useless right now, but probably I just misunderstand the reason for these named profiles#2020-07-0514:43Alex Miller (Clojure team)that sounds right, but I'm not an expert on this end of things. maybe @jaret or @marshall can confirm tomorrow#2020-07-0515:23ashnurthanks, I think I will just clear anything and start completely over#2020-07-0613:34jaretThanks @U064X3EF3! @U0VQ4N5EE catching up from the weekend, were you able to resolve after starting over or are you still seeing permission errors? If so, it may be useful to log a case to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> so we can share your specific policy and review. I suspect that you are in fact having IAM issues and have previously seen issues with setting the specific region. 
I can also double check how you have your profiles configured, because using profiles is our recommended resolution to having local AWS creds defaulted to a different AWS region than your Datomic system.#2020-07-0613:38jaretScratch that I see that @marshall spotted the issue and helped you up higher in the the threads.#2020-07-0613:38marshallhttps://clojurians.slack.com/archives/C03RZMDSH/p1594042271484500?thread_ts=1593933535.453500&cid=C03RZMDSH#2020-07-1010:04ashnurlinking for jaret, sorry if redundant : ) I don't want to spam the channel https://clojurians.slack.com/archives/C03RZMDSH/p1594375442110400?thread_ts=1593933535.453500&cid=C03RZMDSH#2020-07-1010:51ashnuralso tried
{
"Id": "Policy1594355345891",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DatomicS3BucketAccess",
"Effect": "Allow",
"Action": [
"*"
],
"Resource": [
"arn:aws:s3:::datomic-releases-1fc2183a",
"arn:aws:s3:::datomic-releases-1fc2183a/*",
"arn:aws:s3:::datomic-code-7cf69135-6e19-4e99-878e-9c3f4a48ad48",
"arn:aws:s3:::datomic-code-7cf69135-6e19-4e99-878e-9c3f4a48ad48/*"
]
}
]
}
But this says Missing required field Principal#2020-07-0515:24ashnursometimes it helps 🙂#2020-07-0517:19bamarcoI am attempting to log a message by using cast/dev as shown here https://docs.datomic.com/cloud/ions/ions-monitoring.html#dev
The first thing I do in my ion function is call (cast/dev {:msg "socket-connect" :req (str req)})
I can not find this message output in cloudwatch anywhere. I have checked the log group for the datomic system overall and for the specific connect ion. I also tried calling with cast/event with no luck.
I do get a thrown error printed out for my function, but I don't get the log that happens before that error occurs.#2020-07-0523:52Joe Lane@mail524
1. Dev is only for local, and will never show up in cloudwatch
2. If the payload is too large it won't be submitted to cloudwatch.
3. In the process of debugging like this, try printing the (cast/event {:msg "socket-connect" :req (str (keys req))})#2020-07-0604:12ataggartIs there a way to unify two logic variables together, similar to how ground unifies a logic variable with a constant? I tried =, but it doesn't appear to work, as this contrived example shows:
(def query '[:find ?x ?y
:in $ % ?x
:where
(foo? ?x ?y)])
(def ground-y '[[(foo? ?x ?y)
[(ground :y) ?y]]])
(def unify-x-y '[[(foo? ?x ?y)
[(= ?x ?y)]]])
(d/q query (d/db conn) ground-y :x)
; #{[:x :y]}
(d/q query (d/db conn) unify-x-y :x)
; Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg
; (error.clj:79). :db.error/insufficient-binding [?y] not bound in expression
; clause: [(= ?x ?y)]
#2020-07-0613:20favilathe identity function#2020-07-0613:21favila[(identity ?x) ?y]#2020-07-0613:22favilaI’m not sure what you want to do is called unification#2020-07-0613:23favilayou want an "alias" or "clone" of a set with a different name so you can avoid some unifications or unify in different clauses#2020-07-0613:24favilathis identity is also useful for self-joins:#2020-07-0613:25favila[(identity ?x) ?y] [(!= ?x ?y)] [?x :foo/position ?xp] [?y :foo/position ?yp] [(> ?xp ?yp)] silly example#2020-07-0622:47ataggart@U09R86PA4 That did it, thanks!#2020-07-0610:39Ramon Rios:db.error/not-an-entity Unable to resolve entity: :policy-coverage/vt"
Hello, what could be the reasons that my datomic is not finding this field?#2020-07-0613:37marshallyou have likely not installed an attribute :policy-coverage/vt#2020-07-0613:38Ramon RiosI did it, it was that. Now my issue is converting the date type#2020-07-0613:38Ramon RiosNow i'm looking into how to convert local-date to inst#2020-07-0611:00ertugrulcetinHi all, there is a Datomic migration library called conformity which supports only the Peer API, not the Client API. I was considering using Datomic Cloud; is there any migration library that supports the Client API?#2020-07-0612:13kschltzDepending on how active this repo is, I would consider reaching out to conformity's maintainers and ask if there are any plans on supporting the client api; if not, maybe assess how much work would be needed to add it yourself. If you're lucky, maybe it isn't much of a hassle#2020-07-0613:30ertugrulcetin@UNAPH1QMN how are you guys handling migrations in Datomic Cloud? Just sending all edns to the transact fn?#2020-07-0613:46kschltzBasically, yes. We centered our schema in an exclusive repository, shared among the contexts where it is relevant, and agreed to only extend it, never removing fields, something along the lines of: https://docs.datomic.com/on-prem/best-practices.html#grow-schema#2020-07-0614:01ertugrulcetinThanks#2020-07-0621:13bamarco@U0UL1KDLN I am considering updating conformity for use with cloud (I am still not completely settled on migration). It should not be too difficult; I rewrote the internals to work with datascript at one point (never submitted a PR though, as it seemed pretty specific to our use case). I am trying to get my cloud instance up and running first though.
I am getting arn:aws:sts::<42>:assumed-role/<my-datomic>-Compute-<compute-id>-us-east-1/i-<other-id> is not authorized to perform: dynamodb:PutItem on resource: arn:aws:dynamodb:us-east-1:<42>:table/sockets (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: <ABCDEFG>) in my lambda output
I have attached arn:aws:iam::<42>:policy/sockets-lambda-policy to the Optional Configuration Existing IAM policy for instances for my <my-datomic> stack. (That is the root stack, not the compute stack; I am not sure if that was correct, but when I went to update the compute stack it recommended I update the root stack.)
The policy is the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:<42>:table/socket"
    }
  ]
}#2020-07-0621:09Joe LaneYou have to bounce the compute nodes before the policy is picked up, have you done that?#2020-07-0621:10Joe LaneAlso, your resource is socket but the error message is sockets#2020-07-0621:10Joe Lane@mail524 ^ Maybe check that first?#2020-07-0621:14bamarcoI don't know what bounding the compute nodes means#2020-07-0621:14bamarcobouncing*#2020-07-0621:16Joe LaneRestarting the machines, either through a deployment, redeployment, an upgrade, or adjusting the EC2 ASG. A deployment would likely be simplest#2020-07-0621:17Joe LaneBut first I would check if the resource name in your policy is correct.#2020-07-2320:20faviladatomic.api/q does not take a map#2020-07-2320:20favilayou are thinking of either datomic.client.api/q or datomic.api/query#2020-07-2320:21favilathat said, (datomic.api/q {:find …} arg1 arg2) works#2020-07-2320:21favila(i.e. anywhere the vector form of a query is accepted, a map form is ok too--the vector is just sugar for the map)#2020-07-2320:23Drew VerleeI'm looking at these docs:
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/q
> query can be a map, list or string#2020-07-2320:24Drew Verleeoh#2020-07-2320:24Drew Verleequery & inputs#2020-07-2320:25Drew Verleeso qseq (which is my final goal) does take a map. i just sort of skipped reading between the lines it seems.
Usage: (qseq query-map)
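To illustrate favila's point above that the vector form of a query is just sugar for the map form, here is a toy normalizer. This is an illustration only, not Datomic's actual parser; the `vector->map-query` helper name is made up:

```clojure
;; Toy illustration that a vector-form query is the map form flattened:
;; keywords open sections, everything else joins the current section.
;; Not Datomic's actual implementation; `vector->map-query` is made up.
(defn vector->map-query [qvec]
  (first
    (reduce (fn [[m k] x]
              (if (keyword? x)
                [(assoc m x []) x]         ; start a new section, e.g. :find
                [(update m k conj x) k]))  ; add to the current section
            [{} nil]
            qvec)))

(vector->map-query '[:find ?e :in $ ?name :where [?e :node/name ?name]])
;; => {:find [?e], :in [$ ?name], :where [[?e :node/name ?name]]}
```

Either shape is accepted wherever a query is expected, so `(d/q {:find ... :where ...} db arg1)` and the equivalent vector behave the same.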
#2020-07-2414:53tvaughanI have Datomic On-Prem 1.0.6165. When I try to run the console script I get ERROR: This version of Console requires Datomic 0.8.4096.0 to run What am I doing wrong?#2020-07-2415:18jaretHey @tvaughan! Sorry about this, we released a fix for this issue with our standalone conole. You can download it from: https://my.datomic.com/downloads/console#2020-07-2415:19jaretYou’ll need to follow the included README and use bin/install-console to install over the version of console in your release.#2020-07-2415:19jaretWe’re going to correct this issue in the next release of Datomic On-Prem by packaging this version of console with the download.#2020-07-2415:20tvaughanOK. I saw The Datomic Console is included in the Datomic Pro distribution on https://my.datomic.com/downloads/console. Perhaps this should be updated too. Thanks!#2020-07-2415:21jaretOh it is still going to be included going forward its just that this particular version of console had a bug that prevented it from starting with that particular version of Console that wasn’t caught by our CI#2020-07-2415:21jaretBut because we include it, we can’t rip it out after the fact 😞#2020-07-2415:23jarethttps://docs.datomic.com/on-prem/changes.html#console-225#2020-07-2415:25tvaughanI mean a note like "Users of versions x, y, and z will need to download the console separately. Follow these instructions..." I saw this download page and thought I had everything I needed#2020-07-2415:44jaretAh understood! I’ll take a look at that and see if we can get a warning there or some kind of call out#2020-07-2416:11marciolI work at a company that is the second in terms of Clojure developers in Brazil, and the news about the acquisition of Cognitect by Nubank are concerning our C-level board, given that we compete with Nubank in several fronts.
We are using the Datomic Cloud offering right now, but as I really want to keep using Datomic, I'm tempted to suggest a migration to Datomic On-Prem as a way to calm their anxiety about the whole history. The question is: will Datomic On-Prem be a perennial offering for the foreseeable future? Can I strongly defend this option?
cc: @stuarthalloway @marshall @alexmiller#2020-07-2416:22stuarthallowayHi @marciol. In addition to what @alexmiller said, did you see this in Slack yesterday? https://clojurians.slack.com/archives/C03RZMDSH/p1595512519395700#2020-07-2416:24marciolAh yes, I saw it @stuarthalloway, and I sincerely believe that Clojure and the whole ecosystem, including Datomic, will benefit even more. I only need some arguments to deal with business people.#2020-07-2416:25stuarthallowayTo the extent this is somehow about On-Prem vs. Cloud, we plan to continue to enhance both products in parallel, as we have to date.#2020-07-2416:26marciolNice, so I'll use that as an argument, and this can be a real option to relieve the anxiety 😄#2020-07-2416:27marciolYou should know how paranoid business people get about competition and things like that.#2020-07-2416:20ziltiIs there by any chance software that allows editing and adding data in a Datomic database? Basically a Datomic Console with transact capabilities#2020-07-2416:21Alex Miller (Clojure team)@marciol From https://building.nubank.com.br/welcoming-cognitect-nubank/ (CTO of Nubank): "
• The existing development team will continue to enhance both Datomic products: Pro and Cloud
• Nubank is best served by the widespread use of Datomic at other companies. Datomic will continue to be developed as a commercially available, general-purpose database (as opposed to being pulled in-house or restricted)"
#2020-07-2416:41tvaughanThe console is not compatible with a "mem peer server"?
Removing storage with unsupported protocol: mem = datomic:
No storages specified
#2020-07-2416:47favila1. The console is a peer, not a client. So it can’t connect to peer-servers anyway, mem or not.
2. The console doesn’t support mem peers. In theory it could, but that would be nearly pointless because it has no transaction capabilities.#2020-07-2416:48tvaughanGotcha. That makes sense. Thanks for the clarification @U09R86PA4#2020-07-2417:39jarethttps://forum.datomic.com/t/dev-local-0-9-184-now-available/1537#2020-07-2613:47husaynthe datomic-console only seems to be able to do queries … anyone know of a datomic tool which can be used to perform transactions ?#2020-07-3018:52bhurlowhyperfiddle#2020-08-0312:15husaynlooks promising , thanks @U0FHWANJK#2020-07-2618:55vinnyataidehello guys. any idea why the datomic rest-api is deprecated?#2020-07-2618:56vinnyataideI loved it because it makes it trivial to interface my backend with my clojure natal apps#2020-07-2618:56vinnyataidebut I am concerned#2020-07-2618:57vinnyataidethat someday I'll have to learn the whole aws stack to keep using datomic lol#2020-07-2619:32aaroncodingI'm trying to use conformity with datomic pro (on-prem). It breaks though; it seems to be trying to require datomic.api instead of datomic.client.api.
Is there anything I can be doing differently to make it work? Or even an alternative to conformity?#2020-07-2619:54ghadisearch for cloudformity @coding.aaronp #2020-07-2619:54ghadiI’m on mobile otherwise I’d link you#2020-07-2619:56aaroncodingawesome thanks!#2020-07-2620:15Giovani Altelino@husayn , I use the REPL to do the transactions; I find it easier to "replay" the transactions too, since I can just save everything in a namespace.
https://github.com/giovanialtelino/hackernews-lacinia-datomic/blob/master/src/hackernews_lacinia_datomic/db_start.clj#2020-07-2620:46husayn@galtelino yeah, that’s what I use now, was just hoping there was a better tool#2020-07-2713:30SvenIs it possible to preload multiple databases after instance restart or speed up the initial d/connect? I’d like to deploy ions often but every deployment results in the initial d/connect to a non preloaded db taking 10+ seconds.#2020-07-2713:36Joe Lane@sl What kind of topology are you using?#2020-07-2713:49Sven@lanejo01 solo for dev and testing and production for qa and live#2020-07-2713:53Joe Lane• Is it the same production topology for QA and Live?
• How do you know the problem is the d/connect time taking a long time?
• Are you sure it's not because of lambda coldstart?
• Have you considered using HTTP Direct?#2020-07-2714:03Joe Lane@sl I'm not sure how you're measuring the time, but the first thing I would look at is switching to HTTP Direct if you aren't already using it.#2020-07-2714:03Sven1) yes. Though the issue is way more pronounced on solo.
3) lambda cold starts occasionally contribute to the issue but I run lambda warmers to help mitigate this
4) I am using appsync so I have to invoke lambdas#2020-07-2714:05Joe Lane3) I think that lambdas are rebuilt on redeploy so I'm not sure your warmers would be able to help
4) Doesn't appsync also support an http proxy?#2020-07-2714:33Svenhmmm, I have somehow completely missed ions with HTTP direct. I’ll do some testing on production topology. Thanks for that tip!
2) as for d/connect taking a long time - when I manually reboot the solo topology instance and after restart connect to that system from my laptop (bypassing the lambda function altogether), the first connect + query/transaction takes a long time.#2020-07-2714:33David PhamWith dev-local, what are the limitations about storage, number of transactors and readers?#2020-07-2716:30stuarthallowayHi David. dev-local is in process, so there are no transactors. Memory usage is described at https://docs.datomic.com/cloud/dev-local.html#limitations.#2020-07-2905:19David PhamThanks Stuart!#2020-07-2715:07kschltzHi there, I've heard rumours about datomic cloud support for cross db queries, do you guys know something about this? Any reading material on the subject is more than welcome#2020-07-2716:05Joe LaneCross DB queries in cloud refers to the same db at two different points on its timeline, not two different databases across timelines.#2020-07-2717:09kschltzI see, given that I have db-A and db-B, aren't there any built-in features to support simultaneous queries in both databases?#2020-07-2721:36kschltzAnother thing that isn't very clear to me regarding datomic cloud: suppose I have a production topology with several dbs, would the write ops interfere with one another among those databases or are they served by different transactors?#2020-07-2723:05Joe Lane@schultzkaue Different transactors. They would not compete for resources in the way you described.#2020-07-2723:26kschltzGreat, thanks#2020-07-2816:43stuarthallowayI would just add that you can increase the number of processes in the primary compute group if you have many databases: https://docs.datomic.com/cloud/operation/scaling.html#database-scaling#2020-07-2914:24kschltzThat would be a perfect fit#2020-07-2914:24kschltzthx#2020-07-2804:06zebuIs there a way to restore into dev-local a backup taken from datomic-free?#2020-07-2812:59stuarthallowayNot at present. There are a number of differences in core attributes.
You would have to write a program that reads the log from one database, keeps track of entity ids, drops or alters unsupported things (e.g. the bytes values type) and transacts into the other database.#2020-07-2813:21zebuThanks Stu 🙂 I'll look into that#2020-07-2816:05fugbixGood evening everyone!! Is there a way to manipulate arrays with Datomic? (unfortunately I can’t use tuples, as they’re limited to 8 scalars).#2020-07-2816:07favilaI think that limit only applies to heterogenous tuples?#2020-07-2816:07favilahttps://docs.datomic.com/on-prem/schema.html#homogeneous-tuples vs https://docs.datomic.com/on-prem/schema.html#heterogeneous-tuples#2020-07-2816:24fugbixWell I thought so too, but apparently I am unable to transact tuples larger than 8 values using :db.type/tuple :
(d/transact conn [{:db/ident       :weights
                   :db/valueType   :db.type/tuple
                   :db/tupleType   :db.type/double
                   :db/cardinality :db.cardinality/one}])
(d/transact conn [{:weights [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8]}])
(d/transact conn [{:weights [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]}])#2020-07-2816:26fugbixhttps://docs.datomic.com/on-prem/schema.html#tuples
says: A tuple is a collection of 2-8 scalar values#2020-07-2816:41favilathat is unfortunate#2020-07-2816:47favilaconsider using bytes and Double/doubleToLongBits?#2020-07-2816:49fugbixI'll give this a try, thank you!!#2020-07-2816:49favilaor DataOutputStream#2020-07-2816:49favila(a java.io class)#2020-07-2816:49fugbixThanks a lot#2020-07-2819:09Nassindev-local can be migrated to the peer server, correct?#2020-07-2820:01Nassinguess it's just a matter of pointing the transactor to the data dir#2020-07-2912:24arohnerAre there any recommendations on the size of a cardinalityMany attribute? Is there a problem with storing a million uuids in a single datom EAV?#2020-07-2912:53stuarthallowayThere is no particular limit, but you should keep in mind the memory implications of future use.#2020-07-2912:54stuarthallowayFor example, if you gradually build up 1 million EAs, and then retract the entire entity, the transaction that does the retraction will have 1 million datoms in it.#2020-07-2912:55stuarthallowayAlso consider pull expressions, which might have been written (or displayed in a UI) with the presumption that their results are smallish and don't need to be e.g. paginated.#2020-07-2912:58stuarthallowayPrograms consuming a high cardinality attribute may want to use https://docs.datomic.com/cloud/query/query-index-pull.html#aevt to consume in chunks.#2020-07-2913:14arohnerThanks#2020-07-2914:28souenzzoReminder: pull by default gets only 1000 elements on ref-to-many
https://docs.datomic.com/on-prem/pull.html#limit-option#2020-07-2915:41NassinFor example, if you gradually build up 1 million EAs, and then retract the entire entity, the transaction that does the retraction will have 1 million datoms in it.#2020-07-2915:41NassinOnly if isComponent is true correct?#2020-07-2917:07favila@U011VD1RDQT no, isComponent will propagate the delete to other entities#2020-07-2917:09favila[E A 1millionV] is going to delete one million E datoms regardless of whether A is an isComponent attr.#2020-07-2917:14Nassinah true, was thinking it was of type :db.type/ref :+1:#2020-07-2914:37kschltzHi there.
My current scenario is that we're using Datomic Cloud in one of our major services; it is around 60M entities/3.5B datoms and some particular queries are underperforming.
As we plan to grow by some orders of magnitude, I was exploring alternatives to scale both our writes and reads.
From my understanding so far, given that I'm able to scale the number of processes serving my dbs, and that transactors don't compete for resources among those dbs, I started experimenting with the following:
1. Have my service write in parallel to multiple dbs (let's say db0, db1, db2, all with the same schema), ensuring that the same entity always ends up in the correct db so I don't end up with partial data split across my databases
2. When querying, I issue the queries in parallel, then merge the results in my application, something like
(pcalls query-for-satellites0 query-for-satellites1 query-for-satellites2)
So far, this parallel read/write scenario has proven to be really performant.
Now my question to you guys is whether I'm missing something, or are there any architectural gotchas that would make this a bad idea?#2020-07-2919:51marciol@UNAPH1QMN It’d be nice to know if anyone has already experimented with this kind of topology.
You are sharding your data across several dbs and writing/issuing queries in parallel, right?#2020-07-2920:03kschltzYup#2020-07-3013:31cpdeanIs it possible to save rules to a datomic database? I've noticed that datalog rules seem to only be used (in the examples in the docs) when scoped to a single query request
; use recursive rules to implement a graph traversal
; (copied from learndatalogtoday)
(d/q {:query '{:find [?sequel]
               :in [$ % ?title]
               :where [[?m :movie/title ?title]
                       (sequels ?m ?s)
                       [?s :movie/title ?sequel]]}
      :args [@loaded-db
             '[[(sequels ?m1 ?m2) [?m1 :movie/sequel ?m2]]
               [(sequels ?m1 ?m2) [?m :movie/sequel ?m2] (sequels ?m1 ?m)]]
             "Mad Max"]})
Is it possible to save a rule to a database so that requests do not need to specify all of their rules like that? I'm looking at modelling programming languages in datalog and so there will be a lot of foundational rules that need to be added and then higher-level ones that build on top of those.#2020-07-3014:47val_waeselynck@UGHND87PG you may want to read about the perils of stored procedures 🙂
But AFAICT, for your use case, you don't really need durable storage or rules, you merely need calling convenience. I suggest you either put all your rules in a Clojure Var, or use a library like https://github.com/vvvvalvalval/datalog-rules (shameless plug).#2020-07-3014:50val_waeselynckAll that being said, datalog rules are just EDN data, nothing keeps you from storing them e.g in :db.type/string attributes.#2020-07-3016:34cpdeangotcha so it's idiomatic to just collect rules that define various bits of business logic on the application side as a large vec or something and then ship that per request?#2020-07-3016:41cpdeanalso -- i would love to read anything you recommend about the perils of stored procedures! I've gone back and forth quite a bit during my career about relying on a database to process your data, but since i now sit firmly on the side of "process your data with a database", i don't feel like discounting them wholesale. but in any case, since datalog rules are more closely related to views than stored procs, i kinda want them to be stored in the database the way that table views can be defined in a database. but, i'd love to read anything you have about how that feature might be bad and if it's better to force clients to supply their table views.#2020-07-3017:13favilaphilosophically datomic is very much on the side of databases being “dumb” and loosely constrained and having smarts in an application layer. The stored-procedure-like features that exist are there mostly to manage concurrent updates safely, not to enforce business logic. (attribute predicates being a possible, late, narrow exception)#2020-07-3017:14favila(at least IMHO, I don’t speak for cognitect)#2020-07-3018:53cpdeanyeah i'm finding a lot of clever things about its ideas of the data layer -- like, most large scale data systems do well when they enshrine immutability. 
the fact that datomic does that probably resolves a lot of issues around concurrency/transaction management when you allow append-only accretion of data and have applications know at what point in time a fact was true#2020-07-3019:08cpdeanit'd be nice to see if my guess is accurate in the reason for not storing datalog rules in the database, but maybe by keeping rules and complicated businesslogic they could implement out of the database means you avoid problems where a change to a rule would break a client that's old versus a newer client that expects the change. tracing data provenance when the definition of a view is allowed to change makes things difficult to reason about or trace where a number is coming from. By forcing the responsibility of interpretation on the client, it allows clients to manage the complicated parts and keep the extremely boring fact-persistence/data-observations in one place#2020-07-3016:52mafcocincoI have added a composite tuple to my schema in Datomic marked it as unique to provide a composite unique constraint on the data. The :db.cardinality is set to :db.cardinality/one and the :db/unique is set to db.unique/identity. When a unique constraint is set to db.unique/identity on a single attribute, if a transaction is executed against an existing entity, upsert will be enabled as described https://docs.datomic.com/cloud/schema/schema-reference.html#db-unique-identity. I would have expected the behavior to be the same for a composite unique constraint, provided the :db/unique was set to :db.unique/identity. However, that does not appear to be the case as when I try to commit a transaction against an entity that already exists with the specified composite unique constraint, a unique conflict exception is thrown. AFAIK, this is what would happen in the single attribute example if the :db/unique was set to :db.unique/value. Am I missing something or misunderstanding how things are working? 
I’m new to Datomic and I’m assuming this is just a misunderstanding on my part.#2020-07-3017:05favilaResolving tempids to entity ids occurs before adjusting composite indexes, so by the time the composite tuple datom is added to the datom set the transaction processor has already decided on the entity id for that datom#2020-07-3017:06favilaTo get the behavior you want, you would need to reassert the composite value and its components explicitly every time you updated them#2020-07-3017:07favilaThe reason it’s like this is because there’s a circular dependency: to know what the composite tuple should be to update, it needs to know the entity to get its component values to compute the tuple, but to know there’s a conflict it needs to know the tuple value#2020-07-3017:26mafcocincoah, that makes sense. It is relatively trivial to handle the exception and, in the application I’m working on, it is perfectly acceptable to just return an error indicating that the entity already exists. Any individual attributes on the entity that need to be updated can be done as separate operations.#2020-07-3017:26mafcocincoThanks for the explanation.#2020-07-3017:31favilaIf that’s the case, consider using only :db.unique/value instead of identity to avoid possibly surprising upserting in the future.#2020-07-3017:32mafcocincoJust so I’m clear, that is under the assumption that the behavior we discussed above changes such that upserting works with composite unique constraints?#2020-07-3017:32mafcocincoThat makes sense to me, just want to make sure I’m understanding correctly.#2020-07-3017:35favilaI guess that’s possible, but I just mean :db.unique/identity is IMHO a footgun in general#2020-07-3017:35favilaif you don’t need upserting, don’t turn it on#2020-07-3017:57mafcocincogotcha. thanks.#2020-07-3017:57kschltzHi there. 
I was looking for a more straightforward doc on how to scale up my primary group nodes for my datomic cloud production topology, any of you guys could help me on that?#2020-07-3018:25marshall@schultzkaue do you mean make your instance(s) larger or add more of them?#2020-07-3018:25kschltzI wanted more nodes#2020-07-3018:26marshallhttps://docs.datomic.com/cloud/operation/howto.html#update-parameter
^ this is how you choose a larger instance size - change the instance type parameter
for increasing the # of nodes:
https://docs.datomic.com/cloud/operation/scaling.html#database-scaling
Edit the AutoScaling Group for your primary compute group, set it larger#2020-07-3018:27marshallsame approach as is used here: https://docs.datomic.com/cloud/tech-notes/turn-off.html#org7fdb7ff but you set it higher instead of setting it down to 0#2020-07-3018:27kschltzneat! Thank you#2020-07-3120:05hadilsHi there, I am using Datomic Cloud. I would like to compile the code in my CI pipeline before deploying it, to save time and money. Can anyone tell me how Datomic Cloud invokes the compiler, and if it's reproducible?#2020-07-3120:37stuarthallowayHi @hadilsabbagh18. Are you writing ion code that runs inside a cluster node?#2020-07-3120:37hadilsYes sir.#2020-07-3120:39stuarthallowayIf you compile your code before deploying it to an ion, it will load into the cluster node faster, but I am not sure that will save you a visible amount of time or money.#2020-07-3120:40hadils@stuarthalloway I have deployed code that has had Java compiler errors, which costs time and money. I am just trying to pre-compile the code to make sure that it will pass.#2020-07-3120:41stuarthallowayDo you mean Clojure compiler errors? The cluster node does not compile Java for you.#2020-07-3120:41hadilsYes, I mean Clojure compiler errors...#2020-07-3120:41stuarthallowayYou have some options:#2020-07-3120:43stuarthallowayIf you are already going to the trouble of running the compiler locally, then deploy a jar with the compiled code instead of with source. Then there is no compilation on the cluster node, and no possibility of (that class of) error.#2020-07-3120:44stuarthallowayIn that case the cluster node will also start faster after an ion deploy, although the difference may not matter much.#2020-07-3120:45hadilsHow would I indicate to the Ion deployment that I have already compiled my code into a jar? I can figure out that part...#2020-07-3120:45stuarthallowayGood news: you don't have to.#2020-07-3120:45stuarthallowayJars are jars are jars#2020-07-3120:46hadilsok.
So I just declare :gen-class in my code and compile it?#2020-07-3120:46stuarthallowayHave your ion depend on your compiled code as a maven dep.#2020-07-3120:46hadilsOk. Understood.#2020-07-3120:47stuarthallowayYou definitely do not need gen-class#2020-07-3120:47hadilsIn deps.edn right?#2020-07-3120:47stuarthallowayright#2020-07-3120:47stuarthallowayThis leads to a two-project structure, where your code is in one project, and your ion has deps on that code and probably just ion-config.edn.#2020-07-3120:48hadilsAha! Interesting idea!#2020-07-3120:48stuarthallowayI do this all the time. As soon as code is nontrivial I want to use it from more than one ion.#2020-07-3120:49stuarthallowayTo get the compilation benefit, you still need to do whatever maven/leiningen/boot magic you need to compile all your Clojure code in the code project.#2020-07-3120:50hadilsCan I use maven with tools.deps.alpha?#2020-07-3120:51stuarthallowayFor some definitions of "use", yes 🙂#2020-07-3120:53stuarthallowayThis space is evolving https://github.com/clojure/tools.deps.alpha/wiki/Tools#packaging#2020-07-3120:58hadilsI have found @seancorfield's depstar repo. I will use that. Thanks for your help @stuarthalloway!#2020-07-3121:22Nassinis the dev-local client compatible with the on-premise client? (ignoring the features that on-premise supports that cloud doesn't)#2020-07-3121:32cpdeanWhat's the idiomatic way to model something like a link table but against multiple other entities? in old datalog/prolog you'd do something like attrName(entity1, other1, other2, other3). assuming entity1, other1, etc are either scalar values or entity ids.
but in datomic's datalog, if vecs are allowed as a value in a datom, you might be able to do something like this
[entity1 :attrName [other1, other2, other3]]
or if not, you could... maybe this is how you'd do it?
[entity1 :attrName1 other1]
[entity1 :attrName2 other2]
[entity1 :attrName3 other3]
the fact attrName is meant to be something that must join entity1 with 3 other entities, rather than it representing an unordered collection of linked entities, like the :movie/cast attr in http://learndatalogtoday.org#2020-07-3121:38Nassindo all :attrName* express the same relation?#2020-07-3121:40cpdeanyeah. maybe i should have come up with a better concrete example for this...#2020-07-3121:41cpdeanboughtHouse(buyer, seller, house, notary). maybe? i don't actually know how houses are sold haha#2020-07-3121:43favilawhy is this different from having separate ref attributes? each assertion has a different meaning#2020-07-3121:43cpdeanmaybe the orientation of what an entity is can be reversed?
[house-sale-eid :housesale/buyer buyer-eid]
[house-sale-eid :housesale/seller seller-eid]
[house-sale-eid :housesale/house house-eid]
[house-sale-eid :housesale/notary notary-eid]
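The role-per-attribute shape above would typically be asserted as a single transaction map, with the sale itself as a new entity. A minimal sketch, assuming the :housesale/* ref attributes from the discussion exist in the schema; the entity ids below are placeholders:

```clojure
;; Placeholder entity ids, for illustration only.
(def buyer-eid 100)
(def seller-eid 101)
(def house-eid 102)
(def notary-eid 103)

;; One transaction map = one house-sale entity, one ref attribute per role.
;; "new-sale" is a string tempid; against a real connection you would run
;; (d/transact conn tx-data).
(def tx-data
  [{:db/id            "new-sale"
    :housesale/buyer  buyer-eid
    :housesale/seller seller-eid
    :housesale/house  house-eid
    :housesale/notary notary-eid}])
```

Orienting the entity around the sale event (rather than the buyer or house) is what lets each role keep a distinct meaning while still joining all four participants.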
#2020-07-3121:43favila^^ this is what I would expect#2020-07-3121:44cpdeani don't know if it's different - i'm totally new to this and only have a background in dimensional modelling, datavault, and datalog#2020-07-3121:44favilaI think you’re getting at something though. Is it maybe a constraint you’re trying to enforce?#2020-07-3121:45cpdeani definitely know that i want some constraints to be enforced, but i don't know what the term means in datomic's context yet 😬#2020-07-3121:51cpdeanyeah i guess orienting the entity around the event and not the buyer, or whatever the 'primary subject' of the event is is how you'd avoid having more than one instance of an entity for a given field#2020-08-0114:50alidlorenzoWhat exactly counts as an entity in Datomic? Is it whatever datoms are transacted together as part of a single transaction form?
I'm asking bc I'm pondering what would be the best way to model a belongs-to relationship of different namespaced datoms that are always created together?
e.g. an account and a user
should they be transacted* together as a single db entity?
{:tx-data [{:account/username "admin"
:user/email "
or would it be better to make them separate db entities and give one a db ref to the other?
{:tx-data [{:account/username "admin"}
{:user/email "
i imagine making them a single db entity and adding a db ref would be redundant since the datom would be referencing its own db id
{:tx-data [{:account/username "admin"
:user/email "#2020-08-0300:56hadilsI read somewhere that we should never use the Synchronous Client API in Production. Does anyone have experience with using Async in Production?#2020-08-0300:56hadilsPerhaps they can share some insights. I am using Datomic Cloud...#2020-08-0301:18Joe Lane@hadilsabbagh18 I've never heard that before and disagree with "never". I've almost exclusively used the Synchronous api.#2020-08-0301:19hadilsThanks @lanejo01. Congratulations on joining Cognitect/Nubank!#2020-08-0301:19Joe LaneThanks!#2020-08-0308:22plexusFor folks who are interested in Datalog databases in general please come and hang out with us over at #datalog#2020-08-0407:47robert-stuttaford@jaret @marshall what does it mean if i can see a datom in a d/db but not in a d/history of that same db?#2020-08-0412:13jaret@robert-stuttaford any chance the attribute has :db/noHistory set to true?#2020-08-0412:48jaret@robert-stuttaford second thought, you’re getting the history db from the db you see the datom in? If so, that sounds like something we would want to investigate. Would you be able to give us a small repro or better yet, a backup that shows this behavior?#2020-08-0413:26robert-stuttafordthat's right - db and (d/history db)#2020-08-0413:27robert-stuttaford@jaret it's in our prod db, which has all our PII in it, started circa 2012 🙂#2020-08-0413:27robert-stuttafordperhaps we could arrange a zoom and i could show you via screen share, and then we can see about next steps from there?#2020-08-0414:51Lennart BuitHey there, we are using datomic and are currently diagnosing a performance issue related to a recursive rule. We have a tree structure in datomic, that for each node, links a parent (or not), so a schema like this:
(def schema
  [;; Additional tree attributes omitted
   {:db/ident :node/parent
    :db/valueType :db.type/ref
    :db/cardinality :db.cardinality/one}])
Now, we sometimes look through the tree to find, say a descendant of a node. To do so, we have a recursive descendants-of? rule that binds this collection of descendants to ?node:
(def descendants-of?
  '[[(descendants-of? ?root ?node)
     [?node :node/parent ?root]]
    [(descendants-of? ?root ?node)
     [?node :node/parent ?intermediate]
     (descendants-of? ?root ?intermediate)]])
So far so good, we can do queries that reason about descendants, for example finding all descendants for a root:
(d/q '[:find ?node
       :in $ ?e %
       :where
       (descendants-of? ?e ?node)]
     (d/db (conn))
     root-eid
     descendants-of?)
Now, sometimes we have a candidate set of nodes, and of those candidates, we need to find the descendants, say like this:
(d/q '[:find ?node
       :in $ ?name %
       :where
       ;; assuming that `?name` only exists in one tree
       [?e :node/name ?name]
       (descendants-of? ?e ?node)]
     (d/db (conn))
     "name"
     descendants-of?)
In the most pathological case, where all nodes are named ?name, we will be binding all nodes in a tree to ?e, and then find the descendants of those ?e s and bind those to ?node. The result will be the same as the query above: all nodes but the root.
However, this query appears to be much slower. I think that makes sense intuitively if we assume that descendants-of? is kinda expanded per ?e. For each member of ?e, we can potentially redo descendant seeking, if those descendants are also in ?e.
Is there a way to optimise here, if there are potential queries that bind descendants and ancestors to ?e ?#2020-08-0418:18favilaWhy not:
'[[(descendants-of? ?root ?node)
   [?node :node/parent ?root]]
  [(descendants-of? ?root ?node)
   [?intermediate :node/parent ?root]
   (descendants-of? ?intermediate ?node)]]
?#2020-08-0418:19favilaThe second implementation of descendants-of? necessarily scans all :node/parent if ?node is unbound#2020-08-0418:19favilain fact, datomic has a syntax for ensuring a rule input var is bound:#2020-08-0418:20favila'[[(descendants-of? [?root] ?node)
[?node :node/parent ?root]]
[(descendants-of? [?root] ?node)
[?intermediate :node/parent ?root]
(descendants-of? ?intermediate ?node)]]
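To see why favila's rewrite helps, here is a plain-Clojure sketch (no Datomic involved; the `parent-of` map and both function names are made up for illustration) of the two traversal directions over a toy child→parent map:

```clojure
;; Toy model of :node/parent refs: child -> parent. All names here are
;; illustrative, not from the discussion above.
(def parent-of {:b :a, :c :a, :d :b, :e :b})

;; Downward walk from a bound root (the shape of favila's rewrite):
;; invert the parent map once, then visit each child edge exactly once.
(defn descendants-down [root]
  (let [children (reduce (fn [m [c p]] (update m p (fnil conj #{}) c))
                         {} parent-of)]
    (loop [frontier (vec (children root)) acc (set (children root))]
      (if (empty? frontier)
        acc
        (let [cs (children (peek frontier))]
          (recur (into (pop frontier) cs) (into acc cs)))))))

;; Upward walk with the output side unbound (the original rule's shape):
;; every candidate node re-walks its whole ancestor chain toward the root.
(defn descendants-up [root]
  (set (for [n (keys parent-of)
             :when (loop [x n]
                     (when-let [p (parent-of x)]
                       (or (= p root) (recur p))))]
         n)))
```

Both return the same set, but with ?root bound the downward form touches each parent edge once, while the upward form re-walks an ancestor chain per candidate node, which is the scan favila describes.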
#2020-08-0418:46Lennart BuitAh let me take a look, I may have simplified the example a bit too much when lifting it from our codebase#2020-08-0419:11Lennart BuitOh you are on to something! Thank you so much#2020-08-0418:03kennyUsing :as with :db/id does not appear to have any effect. Is this expected?
(d/q
'[:find (pull ?e [(:db/id :as "foo")])
:where
[?e :db/ident :db/cardinality]]
(d/db conn))
=> [[#:db{:id 41, :ident :db/cardinality}]]#2020-08-0419:05kennyOpened a support ticket for this https://support.cognitect.com/hc/en-us/requests/2797#2020-08-0418:04kennyOther db/* attrs work:
(d/pull (d/db conn)
'[(:db/id :as "foo")
(:db/ident :as "ident")]
:db/cardinality)
=> {:db/id 41, :db/ident :db/cardinality, "ident" :db/cardinality}#2020-08-0418:05kennyThis one is strange since :db/ident is included twice.#2020-08-0418:05kenny:db/doc is not included twice.
(d/pull (d/db conn)
'[(:db/id :as "foo")
(:db/ident :as "ident")
(:db/doc :as "doc")]
:db/cardinality)
=>
{:db/id 41,
:db/ident :db/cardinality,
"ident" :db/cardinality,
"doc" "Property of an attribute. Two possible values: :db.cardinality/one for single-valued attributes, and :db.cardinality/many for many-valued attributes. Defaults to :db.cardinality/one."}#2020-08-0418:14souenzzo@kenny :db/id isn't an attribute. where is no Datom[e :db/id v]. It's a "special thing" that d/pull assoc into it's response.#2020-08-0418:15kennySo? I don't think pull requires the selection to be attributes. From the doc "Pull is a declarative way to make hierarchical (and possibly nested) selections of information about entities."#2020-08-0418:17kennyEven if that was a requirement, I don't think it makes sense for it to behave differently than everything else.#2020-08-0418:17souenzzoI agree that it's a bug#2020-08-0508:42thumbnailHi, I'm trying to pull an entity and pull 1 attribute of it's recursive parents.
'[:node/name, :node/bunch-of-attrs, :node/other-attr, {:node/parent ...}] works, but pulls all attrs for every parent. I want to limit that to just name. Any way to achieve that?#2020-08-0512:05favilaDon’t know for sure but try this [,,, {:node/parent [:node/name {:node/parent ...}]}]#2020-08-0512:14thumbnailThat worked! Thanks.#2020-08-0513:59wsbfgDoes anyone know of a good comparison between the various on-prem datomic stores? We're thinking about a new project based on datomic and I was wondering if there exists a discussion of the pros and cons? I think our options would be (1) Postgres (2) Cassandra (3) Some other SQL (4) datomic cloud with the caveat that we're running in google cloud so would require data to cross clouds which may be an issue for Datomic (I can't find an opinion on this either).
Sounds like the choice of data store isn't critical then. That's good to know.#2020-08-0516:07ghadiI wouldn't do cross cloud AWS <> GCP without evaluating ingress/egress costs, or latency#2020-08-0610:54wsbfgYeah that's the big question with that approach. Although we could potentially host our read clients inside AWS meaning that only the writes would originate in GCP which seems likely to work for our usecase. Not ideal to have to run a service away from our others but we need to weigh that up against running a database. Usually running services is easier than databases!#2020-08-0514:07robert-stuttafordjust to forewarn that datomic cloud is a totally different system to on-prem, you'll architect differently and use completely different libraries#2020-08-0520:45arohnerHow do you architect differently?#2020-08-0515:26wsbfgThat's interesting - I'd not really appreciated that. I've found a good link on on-prem vs cloud. Thanks!#2020-08-0517:27kschltzHi there, does datomic cloud have any restrictions regarding AWS regions?#2020-08-0517:35marshallYes, Datomic Cloud is only available in certain regions
the current list is:#2020-08-0517:46kschltzThanks#2020-08-0617:31onetomthe hong kong region would be a welcome addition to that list.
i have the gut feeling that it supports all the required aws features already.#2020-08-0518:40marciolAnyone here is doing serious business with Datomic Cloud without the 24x7 support? The contracted support can be pretty expensive and I notice this is a matter of risk management. We run an application from 9 months and we had problems only when we needed a hands-on help to carry a major migration. It’d be nice to heard about other experiences.#2020-08-0518:58dregreHi folks —
Any tips on how to best approach writing tests for Datomic rules?
My app makes extensive use of rules and I would like to button up the testing.
Much obliged.#2020-08-0617:30onetomim also just about to explore this topic in our current project.
what's your current approach?#2020-08-0619:35dregreMy approach has been to use an in-memory database loaded with the right schema and mock data (fixtures) and then run queries against them — but I wonder if there’s a better approach.#2020-08-0619:37dregreI’m also interested in finding a query profiler or explainer, if anyone’s come across any.#2020-08-0617:39onetomi've just started to use the client api recently (after a few years of dealing only with the peer api).
when i tried to test something simple, i was getting this error:
Execution error (ExceptionInfo) at datomic.client.api.impl/incorrect (impl.clj:43).
Query args must include a database
on a closer look, i can provoke this error by trying to run some of the official examples from the cloud docs (https://docs.datomic.com/cloud/query/query-data-reference.html#calling-static-methods), against a com.datomic/client-pro 0.9.63 backed by a datomic-pro peer-server & transactor 1.0.6165 (and i also have a com.datomic/dev-local 0.9.183 loaded into the same process too):
(dc/q '[:find ?k ?v
:where [(System/getProperties) [[?k ?v]]]])
or this even simpler query:
(dc/q '[:find [?b]
:in [?tup]
:where [[(untuple ?tup) [?a ?b]]]]
[1 2])
is that expected or a bug?#2020-08-0617:41onetomfor the 1st case, where the :in clause is omitted, i might understand the error, but for the second case, i definitely don't expect it#2020-08-0617:41marshallclient query must take a db, can’t be run against a collection the way peer query can#2020-08-0617:42onetomso it's a "bug" in the documentation then, isn't it?#2020-08-0617:42marshalli can look at the docs, if they are client-specific docs then yes#2020-08-0617:43onetomthe cloud docs - i linked - is all client api specific, no?#2020-08-0617:43marshallyes, that example should take a db#2020-08-0617:44onetombut now that we are talking about it, of course it needs a "db", since that's how the query itself can reach the query engine over the network...#2020-08-0617:45marshallright 🙂#2020-08-0617:41marshall@onetom ^#2020-08-0617:48onetomi keep finding myself reaching for the peer api functions, like entid, ident etc.
the https://docs.datomic.com/on-prem/clients-and-peers.html#peer-only section of the docs makes it clear that we should use the pull api instead of these functions, but that's just much more verbose. same issue with the lack of the find-coll and find-scalar from find-spec.
is there any official or popular compatibility lib which fills this gap?#2020-08-0617:49onetomor are there any good examples of how to structure an app in a way that it's concise to test?#2020-08-0617:52onetomfor example, given this function:
(defn by-name
[db merchant-name]
(-> {:query '{:find [?merchant]
:in [$ ?merchant-name]
:where [(or [?merchant :merchant/name ?merchant-name]
[?merchant :merchant/name-en ?merchant-name])]}
:args [db merchant-name]}
(dc/q)
(ffirst)))
my test would look like this:
(deftest by-name-test
(testing "exact match"
(let [db (db-of [{:db/ident :some-merchant
:merchant/name "<merchant name in any language>"}])]
(is (match?
(->> :some-merchant (dc/pull db [:db/id]) :db/id)
(merchant/by-name db "<merchant name in any language>"))))))
where db-of is just a with-db with some schema, made from a dev-local test db.
that (dc/pull db [:db/id]) :db/id is the annoying part and it's even more annoying if im expecting multiple values.#2020-08-0617:54onetomthe benefit of operating with idents is that the test failure messages are symbolic and i don't have to muck around with destructuring string temp-ids, potentially across multiple transactions#2020-08-0618:02onetomi can understand that the client api doesn't want to provide find-scalar and find-coll and ident, entid, so the interface size is small, which helps providing alternative implementations, like the dev-local one, but these functions are just too useful for REPL work and automated tests.
i can also understand how they might seep into application code, promoting inefficient code, but that's not a strong reason for not providing them officially.#2020-08-0619:07onetomfor now, i made a custom matcher, which results in tests like this:
(is (match?
(idents-in db
:matching-merchant-1
:matching-merchant-2)
(merchant/named-like db "matching")))
where idents-in looks like this:
(ns ...
(:require
[matcher-combinators.core :refer [Matcher]] ...))
(defrecord MatchIdents [db expected-idents]
Matcher
(-matcher-for [this] this)
(-matcher-for [this _] this)
(-match [_this actual-entity-refs]
(if-let [issue (#'matcher-combinators.core/validate-input
expected-idents
actual-entity-refs
sequential? 'in-any-order "sequential")]
issue
(#'matcher-combinators.core/match-any-order
expected-idents
(mapv (comp :db/ident (partial dc/pull db [:db/ident]))
actual-entity-refs)
false))))
(defn idents-in [db & entity-idents]
(->MatchIdents db entity-idents))
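A lighter-weight variant of the same idea, sketched here with a stubbed pull function (`refs->idents`, `fake-pull`, and the fake db map are all hypothetical, just to show the shape): resolve each entity ref to its :db/ident before comparing, so test failures print keywords instead of opaque entity ids.

```clojure
;; Hypothetical sketch: `pull-fn` stands in for d/pull (client or peer api).
;; Resolving refs to idents before asserting keeps failure output symbolic.
(defn refs->idents [pull-fn db refs]
  (mapv (comp :db/ident #(pull-fn db [:db/ident] %)) refs))

;; A fake db and pull for demonstration only:
(def fake-db {17 {:db/ident :matching-merchant-1}
              42 {:db/ident :matching-merchant-2}})

(defn fake-pull [db _selector eid]
  (select-keys (db eid) [:db/ident]))

(refs->idents fake-pull fake-db [17 42])
;; => [:matching-merchant-1 :matching-merchant-2]
```

onetom's MatchIdents record above does the same resolution step inside matcher-combinators; this stripped-down version shows just the ref-to-ident mapping.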
#2020-08-0623:37kschltzI have an incident where datomic cloud started giving me busy indexing errors#2020-08-0623:38kschltzI supposed I exceeded my node limit, the stop transacting entirely#2020-08-0623:38kschltzbut after 40min it still gives me busy indexing errors#2020-08-0700:07marciol@U05120CBV can you help us with some tip!#2020-08-0700:09marciol@U0CJ19XAM or @U09R86PA4 you have some tip about this kind of issue?#2020-08-0715:02favilaSorry, no, I don’t have any production experience with cloud#2020-08-0719:29marciolYes, in the end we restarted all machines from the primary computational group.#2020-08-0719:58kschltzas it turns out we largely exceeded our nodes capacity believing it would scale automatically#2020-08-0717:50jarethttps://forum.datomic.com/t/datomic-cloud-704-8957/1571#2020-08-0717:50jarethttps://forum.datomic.com/t/ion-dev-0-9-276-and-ion-0-9-48/1572#2020-08-0717:50jarethttps://forum.datomic.com/t/datomic-1-0-6202-now-available/1570#2020-08-0720:58Jake ShelbyI tried upgrading my ion dep to the latest above, was having trouble with a datomic/java-io dep, anybody else having this issue, am I missing something? (simple example):
~/ion-test-0.9.48
▶ cat deps.edn
{:deps {com.datomic/ion {:mvn/version "0.9.43"}}
:mvn/repos {"datomic-cloud" {:url ""}}}
~/ion-test-0.9.48
▶ clojure -Srepro
Clojure 1.10.1
user=> ^C%
~/ion-test-0.9.48
▶ cat deps.edn
{:deps {com.datomic/ion {:mvn/version "0.9.48"}}
:mvn/repos {"datomic-cloud" {:url ""}}}
~/ion-test-0.9.48
▶ clojure -Srepro
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.pom from datomic-cloud
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:java-io:jar:0.1.19 in central ()#2020-08-0804:58mafcocincoRandom newb question: If an entity is retracted, are all references to that entity automatically retracted as well or are those retractions explicitly required?#2020-08-0814:18potetm> It retracts all the attribute values where the given id is either the entity or value, effectively retracting the entity’s own data and any references to the entity as well.
https://docs.datomic.com/on-prem/transactions.html#dbfn-retractentity#2020-08-0814:19potetmIt’s just a shorthand for retracting all facts involving that entity.#2020-08-0920:18Drew VerleeWhen is it appropriate to namespace/qualify an datomic datom attribute instead of having it un namespaced/unqualified? E.g ( entity human/name drew) vs (entity race human)(entity name drew).
I would say it depends on if you ever need to query those attributes separately.#2020-08-1002:55Saurabh SharanHas anyone tried to build a real-time clojurescript (fulcro) app w/ Datomic? All I could find is https://medium.com/adstage-engineering/realtime-apps-with-om-next-and-datomic-470be2c8204b, but it uses tx-report-queue which isn't supported in Datomic Cloud.#2020-08-1304:44Saurabh Sharan@U09R86PA4 Thanks for the explanation!#2020-08-1013:28Joe Lane@saurabh.sharan1 Can you be more specific about what you mean by "Real-Time"?#2020-08-1023:18hadilsHow do I list the dependencies used by Datomic Cloud? I want to fix my deps.edn...#2020-08-1023:29hadilsSpecifically I need to find the version of tools.deps.alpha that has the reader namespace.#2020-08-1023:34Alex Miller (Clojure team)The reader namespace was recently removed - the changelog is at https://github.com/clojure/tools.deps.alpha/blob/master/CHANGELOG.md #2020-08-1023:34Alex Miller (Clojure team)The reader ns was removed in 0.9.745#2020-08-1023:34Alex Miller (Clojure team)Prior was 0.8.709#2020-08-1023:42hadilsThanks @alexmiller!#2020-08-1113:05simongrayHow would you generally model something like inheritance of attributes from entities in an is_a relationship using datalog, e.g. modelling the class hierarchy found in OOP? The naïve solution would be to query for one entity’s parent and then fetch the parent’s attributes, repeating the process by looping through each entity’s parent entity and collecting any attributes found along the way until there is no parent.
I was wondering if there is a datalog pattern to do this in a single query - or if I need to run many successive queries using Clojure instead?#2020-08-1113:25Joe Lane@simongray https://github.com/cognitect-labs/onto may be a good reference for you. #2020-08-1113:42arohnerI have a transaction that I want to commit iff an existing value hasn’t changed. i.e. [:db.cas eid ::foo bar bar] Is db.cas the best way to do that, or is there a better way?#2020-08-1113:50dmarjenburgh@jake.shelby I have the same problem. It can't download com/datomic/java-io/0.1.19/java-io-0.1.19.pom from datomic-cloud. Problem occurs when trying to upgrade to com.datomic/ion {:mvn/version "0.9.48"}#2020-08-1114:29marshall@jake.shelby @dmarjenburgh - We've released that missing dep; sorry about that and thanks for catching it#2020-08-1116:46Jake ShelbyAwesome thanks, working for me now:
~/ion-test-0.9.48 ⍉
▶ clojure
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.pom from datomic-cloud
Downloading: com/datomic/java-io/0.1.19/java-io-0.1.19.jar from datomic-cloud
Clojure 1.10.1
user=>
#2020-08-1116:46marshall:+1:#2020-08-1121:25JoshHey there, I was testing what error I would get when I hit the string size limit (4096 chars, https://docs.datomic.com/cloud/schema/schema-reference.html#:~:text=Strings%20are%20limited%20to%204096%20characters), and I was surprised to find that transacting strings larger than 4096 characters does not result in an error.
Is the Datomic cloud string size limit a soft limit? If so what other problems could I run into by storing strings larger than 4096 characters?
Here’s a sample of the code I’m running
(ns user
(:require
[datomic.client.api :as d]))
(def db-name "test")
(def get-client
"Return a shared client. Set datomic/ion/starter/config.edn resource
before calling this function."
#(d/client {:server-type :ion
:region "us-west-2"
:system "<system>"
:endpoint "<endpoint>"
:proxy-port 8182}))
(defn get-conn
"Get shared connection."
[]
(d/connect (get-client) {:db-name db-name}))
(defn get-db
"Returns current db value from shared connection."
[]
(d/db (get-conn)))
(def schema
[{:db/ident :string
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
(comment
(d/create-database (get-client) {:db-name db-name})
(d/transact (get-conn) {:tx-data schema})
;; test string limit
(let [s (apply str (repeat 10000 "a"))
tx-report (d/transact (get-conn)
{:tx-data [{:db/id "tempid"
:string s}]})
id (get-in tx-report [:tempids "tempid"])
stored-s (:string (d/pull (get-db) '[*] id))]
(println "s length in: " (count s))
(println "s length out: " (count stored-s))
(println "equal? " (= s stored-s)))
;; =>
;; s length in: 10000
;; s length out: 10000
;; equal? true#2020-08-1200:31Jake Shelbyinteresting … one thing I can think of that would still be limited is the index - are you still able to look up that entity using the value of that large string in a query?#2020-08-1201:17JoshIt seems so, this query returns the expected id
(d/q '[:find ?e
:in $ ?s
:where
[?e :string ?s]]
(get-db)
s)#2020-08-1213:15souenzzoDatomic Team
Is there an issue/tracker for this?
(let [db-uri (doto (str "datomic:mem://" (d/squuid))
d/create-database)
conn (d/connect db-uri)
db (d/db conn)]
(first (d/index-pull db
{:index :avet
:start [:db/txInstant]
:reverse true
:selector '[(:db/id :as :not-id :xform clojure.edn/read-string)]})))
The selector's :as option does not work on :db/id, in any context (pull, query-pull, index-pull)...
PS: :db/id does not respect ANY param; :xform also does not work with it.#2020-08-1300:30kennyI opened a support ticket for this as well. I suggest doing the same so they know there’s interest in getting this fixed. #2020-08-1220:36Sam DeSotaI'm trying to transact a :db/fn via a custom client that works from javascript, but I'm struggling to get the schema right. I keep getting an error:
Value {:lang :clojure, :params [db eids pull-expr], :code "(map eids (fn [eid] (datomic.api/pull db pattern eid)))", :imports [], :requires []} is not a valid :datomic/fn for attribute :db/fn#2020-08-1220:41favilaYou should show your code, but it sounds like you are transactions a map instead of a d/function object#2020-08-1220:42favilaYou can make one either with d/function or the #db/fn reader literal#2020-08-1220:44Sam DeSotaYes, I’m transacting a map, I assumed that’s how transit serialized the d/function object. The issue is that I’m doing this from a javascript library that doesn’t have access to d/function, on the transit level how is this object serialized? (using this from javascript is obviously non-standard, but necessary for my org)#2020-08-1220:50favilaI missed that you were using a custom client#2020-08-1220:50favilaYou control all of this, so whatever you are doing is what you are doing :)#2020-08-1220:50favilaBut when you call d/transact, you need a function object#2020-08-1220:51favilaIt’s unclear to me how you made a d/function object from JavaScript?#2020-08-1223:55Sam DeSotaI ended up just shifting to putting the txfns in the datomic install classpath#2020-08-1220:37Sam DeSotaIt's not clear what's missing to correctly transact this function, using the on-prem obviously. Any ideas?#2020-08-1408:56ziltiHas someone here used Datomic with Algolia before, or with something similar? If so, are there any gotchas or something I should be aware of?#2020-08-1409:47categoryRe:
https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user
https://docs.datomic.com/cloud/ions/ions-tutorial.html#org6699cd4
Please can you confirm whether or not administrator permissions are required for all applications to connect to datomic cloud?#2020-08-1416:48kennyWhen initially transacting your schema on a fresh database and using tuple attributes, do folks typically do 2 transactions -- one for the schema without tupleAttrs and one with the tupleAttrs?#2020-08-1416:51kennyWait, order in the transaction appears to matter! This transaction fails
(d/transact conn {:tx-data [#:db{:ident ::a+b,
:valueType :db.type/tuple,
:tupleAttrs [::a ::b],
:cardinality :db.cardinality/one,
:unique :db.unique/identity}
#:db{:ident ::a,
:valueType :db.type/string,
:cardinality :db.cardinality/one}
#:db{:ident ::b,
:valueType :db.type/string,
:cardinality :db.cardinality/one}]})
Execution error (ExceptionInfo) at datomic.core.error/raise (error.clj:55).
:db.error/invalid-install-attribute First error: :db.error/invalid-tuple-attrs
And this one succeeds.
(d/transact conn {:tx-data [#:db{:ident ::a,
:valueType :db.type/string,
:cardinality :db.cardinality/one}
#:db{:ident ::b,
:valueType :db.type/string,
:cardinality :db.cardinality/one}
#:db{:ident ::a+b,
:valueType :db.type/tuple,
:tupleAttrs [::a ::b],
:cardinality :db.cardinality/one,
:unique :db.unique/identity}]})
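kenny's observation above generalizes: within one transaction, an attribute's :db/tupleAttrs must name attributes that appear earlier in the tx-data. A plain-Clojure sketch of pre-sorting the schema before transacting (`order-schema` is a made-up name; this naive version just pushes all tuple attributes to the end, which is enough when tuples are built only from non-tuple attributes):

```clojure
;; Naive schema ordering: composite tuple attributes go last, so their
;; component attributes are always installed first in the same transaction.
(defn order-schema [schema]
  (let [tuple-attr? #(contains? % :db/tupleAttrs)]
    (vec (concat (remove tuple-attr? schema)
                 (filter tuple-attr? schema)))))

(mapv :db/ident
      (order-schema
       [{:db/ident :a+b :db/valueType :db.type/tuple :db/tupleAttrs [:a :b]
         :db/cardinality :db.cardinality/one}
        {:db/ident :a :db/valueType :db.type/string
         :db/cardinality :db.cardinality/one}
        {:db/ident :b :db/valueType :db.type/string
         :db/cardinality :db.cardinality/one}]))
;; => [:a :b :a+b]
```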
#2020-08-1721:48Jake Shelby[datomic cloud] Trying to update my development solo stack to the latest version. Following the instructions from the documentation (as I understand them), I click “update” on the nested CF stack for “compute” … however, I am presented with a warning:
> It is recommended to update through the root stack
> Updating a nested stack may result in an unstable state where the nested stack is out-of-sync with its root stack.
Is this something I should worry about, it’s not mentioned in the documentation (am I updating the wrong stack?)#2020-08-1722:01kennyDon't know the answer to your specific question but in the future I recommend always deploying storage and compute as separate stacks.#2020-08-1723:25marshall@U018P5YRB8U you should split your stack before you upgrade#2020-08-1723:25marshallyou should definitely NOT upgrade a nested stack#2020-08-1723:25marshallsee https://docs.datomic.com/cloud/operation/split-stacks.html#2020-08-1723:40Jake Shelbyokay thanks, it wasn’t clear to me if a solo stack needed to be split at all (because these docs seemed to be specifically for production deployments)#2020-08-1723:41marshallah, yes it should be#2020-08-1723:41marshalland we'll look at improving those docs#2020-08-1723:41Jake Shelbygreat, thanks for the responsiveness!#2020-08-1813:04mafcocincoWhat is an idiomatic way to express uniqueness within a set of attribute values in Datomic? That is, if I have an attribute that is of type db.type/ref and it is of db.cardinality/many, how do I enforce a uniqueness constraint on the set of values that is being referred to, in the context of the containing value?#2020-08-1813:09favilaAre you saying that the uniqueness constraint is expressed among the entities referenced?#2020-08-1813:10favilaso normal ref uniqueness is not enough, and the referenced entities themselves don’t have inherent uniqueness#2020-08-1813:10favilawould an example be: A person may have many addresses, but only one may be a primary address#2020-09-2520:19csmyou don’t have a selector, you’d need (d/pull (d/db conn) '[*] id) to pull everything for id.#2020-09-2520:20csmClient api can also use an arg map: (d/pull db {:selector '[*] :eid id})#2020-09-2520:21Michael J Dorianthank you, that did it!#2020-09-2520:21manutter51Yeah that was it, I was looking up the docs:
datomic.api/pull
([db pattern eid])
Returns a hierarchical selection of attributes for eid.
See for more information.#2020-09-2520:21manutter51I was just working with pull expressions too, should have spotted that sooner.#2020-09-2520:22Michael J DorianSorry for the silly question, these docs have a lot of "..." that really throws me off. I'm curious why I didn't get an arity exception though#2020-09-2520:23manutter51Yeah, seems like you should have.#2020-09-2520:23Michael J DorianOh, I guess I could have included the selector and :eid all in a map. All makes sense now. Thanks everyone!#2020-09-2713:37nandoI'm trying to work out how to sort a collection of items nested within the data structure returned from a pull pattern, particularly a pull that uses a reverse lookup. Here's the pull pattern I'm working with:
[:db/id
{:batch/formula [:db/id :formula/name]}
:batch/doses
:batch/date
{:batch-item/_batch [:db/id
{:batch-item/nutrient [:nutrient/name
{:nutrient/category [:category/sort-order]}]}
:batch-item/weight
:batch-item/complete?]}]
The :batch-item/_batch bit returns a rather large collection and I want to sort it by :category/sort-order and :nutrient/name#2020-09-2713:39souenzzo@nando you can use #specter with something like (transform [(walker :batch-item/nutrient) :batch-item/nutrient] (partial sort-by :nutrient/category) (d/pull ...))#2020-09-2713:44nandoSo I would wrap the pull in a specter transform? With a query that returns a flat structure, I'd use
(sort-by (juxt :sort-order :nutrient-name)
(d/q ...
#2020-09-2714:22souenzzo#specter will help you to "find some sub-structure and transform it without changing anything outside it"#2020-09-2714:24souenzzoonce you find what you want to transform (the second argument, known as the 'path'. On the example: find a map with this key, and 'enter' this key)#2020-09-2714:24souenzzoin this case the transform function will not be sort-by :nutrient/category, but something like #(sort-by (fn [el] ((juxt ..) el)) %)#2020-09-2714:26souenzzothe path is [(walker :batch-item/nutrient) :batch-item/nutrient ALL]
TLDR; #datomic does not do anything about sorting#2020-09-2714:50nandoThanks @souenzzo , will look into #specter next.#2020-09-2718:25daniel.spanielDoes datomic query syntax allow for group-by? I wanted to group some datoms by date and then count them.#2020-09-2719:00nandoI've been looking at the clojure core function group-by for this https://clojuredocs.org/clojure.core/group-by#2020-09-2719:02daniel.spanielHas it worked? I will try it as well .. see what happens .. good idea#2020-09-2719:06nandoI haven't incorporated it into the app I'm working on, but it certainly worked in the REPL#2020-09-2719:20nando(defn group-nutrients-by-category
[v]
(group-by :category-name v))
I've got a datomic query that returns nutrients, and each of these have a category, such as Vitamins, Minerals, Amino Acids, Plant Extracts. I just tried the above, using (group-nutrients-by-category (find-all-nutrients)) and it worked perfectly, as expected.#2020-09-2719:27daniel.spanielright, that is doing group-by after the query .. i meant in the query itself ..#2020-09-2719:30nandoIt is my understanding that sorting and grouping is done with clojure functions rather than datalog query syntax.#2020-09-2719:31daniel.spanieli reckon so. there are some other aggregate function like max, min count, but not group-by or sort-by that are built in#2020-09-2719:32nandoHave you tried to sort the results of a query yet?#2020-09-2719:32daniel.spanieloh sure, tis easy#2020-09-2719:33Joe Lanehttps://docs.datomic.com/cloud/query/query-data-reference.html#aggregate-example#2020-09-2719:34nando^^^#2020-09-2719:35Joe LaneIs this not what you mean when you say group-by?#2020-09-2719:38Joe Lane@dansudol '[:find ?date (count ?e) :where [?e :entity/date ?date]]#2020-09-2719:39Joe LaneHave you looked at https://docs.datomic.com/cloud/query/query-data-reference.html#aggregate-example#2020-09-2719:41daniel.spanielyes, that is pretty close to the query i need Joe, interesting .. i guess if that does the same as group by ( i am reading the examples now ) then that does it .. 
i am going for something a bit more complicated ( count by date range ) but if this works as grouping by date then i am super close to what i want#2020-09-2719:44Joe LaneAre the date ranges contiguous and non-overlapping?#2020-09-2719:44daniel.spanielyes#2020-09-2719:44daniel.spanielbeginning ->end of a month , so finding items whose dates are in that range and counting them up, where let's say the range is a year, so each month, wanted the count of the items ( that have date field on them )#2020-09-2719:46Joe LaneWhat datomic system are you using?#2020-09-2719:46daniel.spanielcloud#2020-09-2719:51nandoIf I'm understanding the difference correctly, using group-by will return all records, while using count in an aggregate query will return a single record for each date.#2020-09-2719:53daniel.spanielyou can't use group-by in the query though, just to operate on the returned data , but the last part is right i reckon#2020-09-2719:54daniel.spanieli guess the count by date is kinda grouping dates in a way so there is the element of group by there#2020-09-2719:57Joe Lane'[:find ?month (count ?e)
:where
[(java.time.ZoneId/of "UTC") ?UTC]
[?e :entity/date ?date]
[(.toInstant ^Date ?date) ?inst]
[(.atZone ^Instant ?inst ?UTC) ?inst-in-zone]
[(.getMonthValue ^ZonedDateTime ?inst-in-zone) ?month]
#2020-09-2719:58Joe LaneConsider the above a sketch, written in slack, untested, likely need to add a few things.#2020-09-2719:59daniel.spanielthat is pretty hillarious Joe, nifty idea , i will hack around it#2020-09-2720:03Joe LaneThe instant type in datomic is a java.util.Date, so if you want to use the nice .getMonthValue method you'll need some combination of that.
There are several other things you could do like make a custom query function to do all the gnarly time conversion stuff in an isolated way. https://docs.datomic.com/cloud/query/query-data-reference.html#deploying
Other than that time conversion stuff, this is a pretty trivial query, right?
It's basically:
'[:find ?month (count ?e)
:where
[?e :entity/date ?date]
[(my.ions/date->month ?date) ?month]]
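A possible body for the my.ions/date->month placeholder above, sketched in plain Clojure and assuming UTC is the zone you want (note that month numbers from different years collapse together, so a real version might return a [year month] pair instead):

```clojure
;; Sketch of a date->month helper: convert the java.util.Date that
;; :db.type/instant yields into a 1-12 month number, fixed to UTC.
(defn date->month [^java.util.Date d]
  (.getMonthValue (.atZone (.toInstant d) (java.time.ZoneId/of "UTC"))))

(date->month #inst "2020-08-15T12:00:00.000-00:00")
;; => 8
```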
#2020-09-2720:04Joe Lane(You might need to use :with ?month in that query, I'd have to think about it...)#2020-09-2720:04daniel.spanielpretty much, your idea is good .. me like#2020-09-2720:26daniel.spanielinteresting @lanejo01 .. this works, very nice ( i made my own database function as you suggested ) slick !#2020-09-2720:27Joe LaneGreat to hear! #2020-09-2806:38David PhamIs it possible to find the entity with the maximum of some attribute in datalog?#2020-09-2808:37Yuriy Zaytsev(d/q '{:find [(max ?attr)] :in [$] :where [[_ :some/attribute ?attr]]} db)#2020-09-2815:58David PhamHow do you get the entity whose attribute is the maximum?#2020-09-2816:08Yuriy Zaytsev(d/q '{:find [?entity]
:in [$]
:where [[(datomic.client.api/q '{:find [(max ?attr)]
:in [$]
:where [[?entity :some/attribute ?attr]]} $) [[?attr]]]
[?entity :user-metric/elapsed-ms ?attr]]} db)
#2020-09-2816:42David PhamSo nested queries?#2020-09-2816:43Yuriy Zaytsevyes#2020-09-2822:20steveb8nQ: I have a very slow memory leak in a production Cloud system. Before I start dumping logs and digging around, I wonder if folks out there have any tricks/tips for this process. I’ll post the chart in the thread…..#2020-09-2822:22steveb8n#2020-09-2822:22steveb8nIn particular, I wonder why the indexer line goes up. And does that provide a clue about the leak?#2020-09-2912:15jaretHi @U0510KXTU, have you actually seen a node go OOM or are you just noticing this in your metrics/dashboard? This small snippet matches with the expectations I have for indexing. The indexing job occurs in the background. Indexing is done in memory and then the in-memory index is merged with the persistent index and a new persistent index is written to the storage service. If you widen the time scale you should see a saw tooth pattern on your indexing line.#2020-09-2922:20steveb8n@U1QJACBUM No I haven’t yet in prod but the same code running on Solo (test system) has gone OOM. That chart is 2 weeks, hence no saw tooth. Here’s the hour just gone. Saw tooth as expected#2020-09-2922:21steveb8nInteresting that you think this is normal. Is there some doc somewhere that describes what “normal” is for charts in the dashboard? That would help me (and others I suspect)#2020-09-2922:22steveb8nWhenever I deploy new code, the FreeMem line jumps back up to 10Mb and starts the slow decline#2020-09-2906:29armedHi. Is there any way to make a custom transaction function omit an operation (e.g. return nil instead of a transaction operation)? I want to omit insertion of data in some situations.#2020-09-2906:36armedI have a permission entity with a composite tuple (unique) on all three attributes. When I try to bulk insert a list of permissions the transaction sometimes aborts with a unique exception.
I want to make something like postgres's on conflict do nothing. Here is my transaction function, which is obviously not working.
(defn try-add-permission
[db {:keys [permission/app
permission/user
permission/role] :as perm}]
(if (d/q '[:find ?p .
:in $ ?app ?user ?role
:where
[?p :permission/app ?app]
[?p :permission/user ?user]
[?p :permission/role ?role]]
db app user role)
nil
perm))#2020-09-2907:06tatuthow about returning empty vector instead of nil?#2020-09-2907:18armedI already tried that. Got error
{:status :failed,
:val #error{:cause "Cannot write #2020-09-2907:21armed@(d/transact (db/get-connection) [[auth.server.cas-sync/try-add-permission perm]])
#2020-09-2907:32tatutyou need to quote db fn name#2020-09-2907:35armedquoting does not help. And docs does not use quoting https://docs.datomic.com/on-prem/database-functions.html#using-transaction-functions#2020-09-2907:36tatutah, it’s on-prem, don’t know about that#2020-09-2908:28favilaThis is a quoting issue. The exception is related to serializing the function, which you can’t do. Your function is not being executed yet#2020-09-2908:31favilaIn fact your transaction data hasn’t left the peer. What is the error you get when you quote the function name?#2020-09-2909:25armedwhen I quote like this:#2020-09-2909:25armed@(d/transact
(db/get-connection) [['auth.server.cas-sync/try-add-permission perm]])
#2020-09-2909:26armedI get error: Could not locate auth/server/cas_sync__init.class, auth/server/cas_sync.clj or auth/server/cas_sync.cljc on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.#2020-09-2909:32favilais this function installed on your DB?#2020-09-2909:33favilayou’ll note in your link you need to make classpath transaction functions available on the classpath of the transactor. This error looks like it can’t find the function#2020-09-2909:35favilaactually it can’t even find the namespace#2020-09-2910:44armed@U09R86PA4 thanks, It seems that I misunderstood how transaction functions work.#2020-09-2909:50tatutUpdating datomic cloud compute group to 2020/09/23 715-8973 release, the log shows that the compute nodes don’t seem to get up after upgrade… complaining that our application code has syntax error (which it shouldn’t as it worked in previous version)#2020-09-2909:51tatutproduction topology#2020-09-2909:53tatut"Msg": ":datomic.cloud.cluster-node/-main failed: Syntax error compiling at …
#2020-09-2909:54tatutit doesn’t seem to find a required .cljc file#2020-10-0107:53onetomhas this been solved yet?#2020-10-0111:33tatutyes, workaround with support… it seemd you can’t have paths that point to for example `“../common/src” (like we have sharing backend and frontend cljc code)#2020-10-0111:34tatutit worked in all previous versions, but it doesn’t anymore with the latest#2020-10-0111:34tatutworkaround with symlinks seems ok#2020-09-2911:49onetomWhen I connect a web ion via an API Gateway proxied thru a lambda, my ion function is supposed to receive a ring-compatible map as an argument (according to the official Ion docs).
However, the map I receive only contains :headers, :server-name and a :datomic.ion.edn.api-gateway/data and /json keys, so I can't just use the typical routing libs to build my web-app or http API, because those depend on the :request-method and :uri keys of the request map.
Is it a known issue?
Is it something related to the Lambda proxy data format version?
Is it just some kind of mis-configuration?#2020-09-2911:49onetomhere is an example request map I observed:
{:headers
{"accept-encoding" "gzip, deflate",
"content-length" "0",
"host" "",
"user-agent" "http-kit/2.0",
"x-amzn-trace-id" "Root=1-5f730b24-4ac64db84deabaf53c38af60",
"x-forwarded-for" "42.200.88.157",
"x-forwarded-port" "443",
"x-forwarded-proto" "https"},
:server-name "",
:datomic.ion.edn.api-gateway/json
"{\"version\":\"2.0\",\"routeKey\":\"$default\",\"rawPath\":\"/\",\"rawQueryString\":\"\",\"headers\":{\"accept-encoding\":\"gzip, deflate\",\"content-length\":\"0\",\"host\":\"\",\"user-agent\":\"http-kit/2.0\",\"x-amzn-trace-id\":\"Root=1-5f730b24-4ac64db84deabaf53c38af60\",\"x-forwarded-for\":\"42.200.88.157\",\"x-forwarded-port\":\"443\",\"x-forwarded-proto\":\"https\"},\"requestContext\":{\"accountId\":\"191560372108\",\"apiId\":\"8g759uq7nb\",\"domainName\":\"\",\"domainPrefix\":\"8g759uq7nb\",\"http\":{\"method\":\"GET\",\"path\":\"/\",\"protocol\":\"HTTP/1.1\",\"sourceIp\":\"42.200.88.157\",\"userAgent\":\"http-kit/2.0\"},\"requestId\":\"Tn6trha8yQ0EMGg=\",\"routeKey\":\"$default\",\"stage\":\"$default\",\"time\":\"29/Sep/2020:10:23:32 +0000\",\"timeEpoch\":1601375012276},\"isBase64Encoded\":false}",
:datomic.ion.edn.api-gateway/data
{:version "2.0",
:routeKey "$default",
:rawPath "/",
:rawQueryString "",
:headers
{:accept-encoding "gzip, deflate",
:content-length "0",
:host "",
:user-agent "http-kit/2.0",
:x-amzn-trace-id "Root=1-5f730b24-4ac64db84deabaf53c38af60",
:x-forwarded-for "42.200.88.157",
:x-forwarded-port "443",
:x-forwarded-proto "https"},
:requestContext
{:routeKey "$default",
:stage "$default",
:time "29/Sep/2020:10:23:32 +0000",
:domainPrefix "8g759uq7nb",
:requestId "Tn6trha8yQ0EMGg=",
:domainName "",
:http
{:method "GET",
:path "/datomic",
:protocol "HTTP/1.1",
:sourceIp "42.200.88.157",
:userAgent "http-kit/2.0"},
:accountId "191560372108",
:apiId "8g759uq7nb",
:timeEpoch 1601375012276},
:isBase64Encoded false},
}#2020-09-2911:52onetommy ion-config.edn looks like this:
{:allow [datomic.ion.starter.http/ionized-app]
:lambdas {:app
{:fn datomic.ion.starter.http/ionized-app
:integration :api-gateway/proxy
:description "return html app"}}
;:http-direct {:handler-fn datomic.ion.starter.http/return-something-json}
:app-name "kyt-dev"}#2020-09-2911:55onetomI'm using the Solo topology (the version, which was the latest last week), otherwise I wouldn't bother with lambda gateways if I could use the production topology.#2020-09-2911:57onetomthe docs are mentioning these /json and /data keys in a note, but just in the table above the note, they are not namespaced:
https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion#2020-09-2912:43Joe LaneHey @U086D6TBN , have a look at https://github.com/pedestal/pedestal.ions
And https://github.com/pedestal/pedestal-ions-sample
#2020-09-2915:27onetomthanks, I had a look, but I don't see how would it deal with my situation.
it does have a great example of a protocol which converts the response body into an input stream, which I still need, because the reitit.ring/create-resource-handler just returns a java.io.File as a :body and Datomic threw some ->>bbuff conversion error as a result.
for now, I just have a middleware to transform the above mentioned request map to be ring compatible:
(if-let [gw-data (:datomic.ion.edn.api-gateway/data gw-req)]
(-> gw-req
(assoc :uri (-> gw-data :requestContext :http :path))
(assoc :request-method (-> gw-data :requestContext :http :method)))
gw-req)#2020-09-2915:33Joe LaneI want to make sure I understand, did you call apigw/ionize on your ring handler function?
https://docs.datomic.com/cloud/ions/ions-tutorial.html#lambda-proxy#2020-09-2915:34onetomI have the suspicion that our ion-config.edn doesn't need the :integration :api-gateway/proxy option anymore if I use the newer style HTTP API gateway setup (as opposed to the RESTful API style), it just hasn't been documented...
(fn [req]
{:status 200
:headers {"content-type" "text/plain"}
:body (with-out-str
(clojure.pprint/pprint req))})#2020-09-2915:37onetomthat's how I obtained the request map I showed above#2020-09-2915:38Joe LaneTo make sure I understand correctly, you're not using the supported integration. Is there still a problem if you use the supported one?#2020-09-2915:39onetomwhat do you mean by supported integration?#2020-09-2915:40Joe Lanehttps://clojurians.slack.com/archives/C03RZMDSH/p1601393646108500?thread_ts=1601380150.089100&cid=C03RZMDSH#2020-09-2915:41onetomI'm just realizing that probably the Datomic docs are talking about how to integrate a web ion with the traditional RESTful API gateway, not the new "HTTP API".
I'm using this "new style" gateway, because it supports JWT authorizers out of the box, without the need to deploy a lambda function just for that purpose.#2020-09-2915:43onetomyes, the mentioned request map was observed when my ion-config.edn contained that :integration :api-gateway/proxy setting#2020-09-2915:46onetommy API gw was created by this sample CF template though:
https://github.com/awsdocs/amazon-api-gateway-developer-guide/blob/master/cloudformation-templates/HTTP/http-with-jwt-auth.yaml
which I found in these AWS docs:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html#2020-09-2915:48Joe LaneI'm not sure that approach is compatible with the current mechanism for making a ring compatible handler. I think you're in uncharted territory, rife with undefined behavior.#2020-09-2915:49onetomProbably... Thanks looking into it!#2020-09-2915:51onetomI will probably just transition to the production topology, though I can foresee issues with the VPC link and NLB in that case, which I have just as little experience with as I have with Cognito and JWT authorizers :)#2020-09-2915:52Joe LaneYou will get a raw payload if you use http-direct#2020-09-2915:52Joe Laneit will not be a nice map like above.#2020-09-2915:53onetomAlternatively, I can write a replacement ionizer function for this new "HTTP API" gateway#2020-09-2915:54Joe LaneYou're free to do that, but we won't be able to support that if there are issues.#2020-09-3000:43jeff.terrell@U086D6TBN try switching the payload format version in the integration config for your API Gateway instance. Saying that based purely on memory so details may be off, but I’m pretty sure I got cut by this exact issue and that was the solution that I eventually found.#2020-09-3002:37onetom@U056QFNM5 thanks a lot for the advice. it worked indeed and I can even see how the JWT authorizer has decoded the token!#2020-09-3002:38onetomso i don't have to fall back to the old, RESTful-style API gateway creation#2020-09-3003:15jeff.terrellYou're very welcome. Also keep an eye out for setting cookies. I ran into an issue where the value of my set cookie header in my ring response map was a vector rather than a string. Apparently this is legal in ring, but it didn't work in an Ions context. Again, going from memory here, but I think that was right. 
A simple middleware to detect such values and only take the first value out of the vector worked.#2020-10-0710:28xcenoHi guys, just found this thread because I'm working on the exact same thing right now (trying to deploy an SPA as ion / lambda proxy)
I got confused by the mismatch between the Ion tutorial and the API Gateway console, so just to clarify once more:
What the Datomic Ion docs are talking about is now called REST API on AWS?
And the HTTP API is a new thing, that is not officially supported?#2020-10-0713:41jeff.terrellYes, REST is the old kind that the docs implicitly refer to. HTTP can work, but it's not what the docs describe specifically.#2020-10-0714:00xcenoUnderstood, thank you!#2020-09-2915:29Petrus TheronJust gotten bitten for an hour getting 401 Unauthorized for Datomic Pro due to missing XML schema in ~/.m2/settings.xml, which is not mentioned in https://my.datomic.com/account. Previously: https://clojurians-log.clojureverse.org/datomic/2019-01-30/1548890962.888000#2020-09-2915:34jaretSorry about that, what is missing in https://my.datomic.com/account ? I see the .m2/settings.xml described as:
;; In ~/.m2/settings.xml:
<!-- ~/.m2/settings.xml (see the Maven server settings docs) -->
<servers>
…
<server>
<id></id>
<username>REDACTED</username>
<password>REDACTED</password>
</server>
…
</servers>
;; In deps.edn:
{:mvn/repos
{"" {:url ""}}
:deps
{com.datomic/datomic-pro {:mvn/version "${VERSION}"}}}#2020-09-2915:34Petrus Theron<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns=""
xmlns:xsi=""
xsi:schemaLocation="">
#2020-09-2915:35jaretack! let me talk to Alex so i fully understand and I can update our http://my.datomic.com account to reflect that!#2020-09-2915:39Petrus TheronAlso - and this is probably out of scope - but even after fixing settings.xml (and running clj from terminal), IntelliJ Cursive still reported a 401 for deps because it caches the 401 error for a given deps.edn (not sure if this is due to Maven or IntelliJ). Fixed after reordering any two items under :deps, then I could start a REPL.#2020-10-0920:42tekacsI'm now debugging this 401 issue in my own case, where Datomic fails to download consistently on Github Actions for CI purposes but not locally on my machine.#2020-09-2918:52Michael J DorianHey! I have a transaction that always puts one entry into datomic, and I'd like to get it to return the entity id of the new entry.
I notice that the returned map contains :tx-data, which has the data I need. But I'm not sure how to read the contents of the returned datum, and, indeed, whether this is considered bad practice or not.
Help appreciated!#2020-09-2918:54ghadithe returned map also contains :tempids which is a map of tempid -> entity id#2020-09-2918:54ghadi@doby162#2020-09-2918:55Michael J DorianI'm getting an empty map on that one, do I just need to add a temp-id to the transaction?#2020-09-2918:57ghadipaste your code/input#2020-09-2918:58ghadiif your transaction included tempids, datomic returns the resolved ids after it transacts#2020-09-2919:01Michael J Dorian{:tx-data [#:user{:email "e", :password "q", :name "q", :token "q"}]} ; this query is generated by (make-record :user) and executed
(def q (make-record :user "q" "q" "q" "q"))
(:tempids q)#2020-09-2919:06Michael J DorianAh, ok! Just had to add :db/id "nonsense" to my query and now the map gives me {"nonsense" id} !#2020-09-2919:06Michael J Dorianthanks!#2020-09-2919:09ghadiif :user/email is a unique attribute in your database, you can use it to lookup entities without entity ids#2020-09-2919:11Michael J DorianOh, nice#2020-09-3007:21Ben SlessHi all, I have a silly question regarding the pricing model, maybe I'm just missing something:
Is the pricing only of instances running Datomic (transactor, etc), or for application instances using the client library as well?#2020-09-3013:26marshall@ben.sless Datomic Cloud presumably?
The pricing is only for the nodes/instances running Datomic Cloud software (the nodes started by the Cloudformation template)#2020-09-3013:32Ben Slessalright, then no charge for the number of clients, only for instances running Datomic itself.
What about on-prem?#2020-10-0107:59onetom@ben.sless i think this page answers that question well:
https://www.datomic.com/get-datomic.html
> All Datomic On-Prem licenses are perpetual and include all features:
>
> • Unlimited Peers and/or Clients#2020-09-3013:43ziltiI've seen datomic.api/entity. How is it supposed to work? I give it the db plus a :db/id and it is then supposed to give me a map with all attributes? Or how do I use it? There is only documentation for the Java version, not the Clojure one.#2020-09-3013:45ziltiThe immediate result is a map with the key :db/id and nothing else#2020-09-3013:47marshalldocumentation for entity: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/entity#2020-09-3013:48marshallseveral day of datomic examples use the entity API:
https://github.com/Datomic/day-of-datomic/blob/a5f7f0bd084a62df7cc58b9a1c6fe7f8340f9b23/tutorial/hello_world.clj
https://github.com/Datomic/day-of-datomic/blob/a5f7f0bd084a62df7cc58b9a1c6fe7f8340f9b23/tutorial/data_functions.clj#2020-09-3013:49marshallhowever, you should also familiarize yourself with the pull API, as it provides much of the same functionality as entity (some differences), but is available in both peer and client#2020-09-3014:24ziltiThanks, ah, so it only fetches the key names when explicitly asked! Well, I don't think I'll ever use the client library, but I have looked at pull as well (and use it regularly inside a query)#2020-10-0108:01onetomwhy do u think u would never use the client library?#2020-10-0109:52ziltiWe're using Datomic on-prem and only use Clojure, so there's simply no reason to#2020-09-3017:52jI'm running the free version datomic via https://github.com/fulcrologic/fulcro-rad-demo. How do I interface with datamic to peek inside it?#2020-10-0110:55joshkhpulling a reference to an ident returns an entity with the ident's db id as well as its db ident:
(d/q '{:find [(pull ?n [:some/ref])]
:in [$ ?n]}
db 999)
=> [{:some/ref [{:db/id 987 :db/ident :some/ident}]}]
whereas pulling a tuple with a reference to an ident only returns its db id
(d/q '{:find [(pull ?n [:some/tuple])]
:in [$ ?n]}
db 999)
=> [{:some/tuple [987]}]
it would be great if pulling a reference to an ident from a tuple behaved the same way as pulling a reference to an ident from outside a tuple. i could untuple the values in the query constraints and find their idents, but that constrains my results to only entities that have :some/tuple values, unlike pull#2020-10-0111:07joshkhin other words, i can't seem to pull https://docs.datomic.com/cloud/schema/schema-modeling.html#enums from within tuples#2020-10-0114:15vnczHey!
I have recently been looking at Datomic, reading the documentation and watching almost every single video I could possibly find on the internet. I really like what I've seen so far; I do have couple of questions that I still haven't figured out. Any help here would be appreciated.
I've seen this idea that once you get a db from a connection — it's an immutable value and you can do queries on it by leveraging the query engine that's embedded in the application.
That is a great abstraction, but I am assuming that under the hood the peer library will be grabbing the required datoms from the storage engine; that inevitably will go over the network. With that in mind:
• What happens if there's a network failure while fetching the data from the storage? Is the peer library going to retry that automatically? What if it fails continuously? Will I ultimately see a thrown exception out of nowhere?
• What happens if to satisfy a query the peer library needs to grab more data from the storage engine? Is that going to block the thread where the query is being executed? (I'm assuming this depends on whether I'm using the sync or async API)#2020-10-0115:07marshallThe details of the answers to these questions depend a little bit on whether you're talking about the client API (cloud or on-prem) or the peer API (on-prem only)#2020-10-0115:07marshallhttps://docs.datomic.com/on-prem/clients-and-peers.html#2020-10-0115:52vnczAh ok interesting. I'll definitely take a look at it then#2020-10-0122:59vncz@U05120CBV I've just reviewed the document. I guess my confusion is here
> Compared to the Peer API, the Client API introduces a network hop for read operations, increasing latency.
Doesn't the Peer API also need to grab the data from the storage engine? How does the data get delivered then?#2020-10-0123:21marshallPeer reads directly from storage itself, client sends the request to peer server or a cloud node, where the storage read occurs#2020-10-0200:32vnczWell ok, so my point still stands @U05120CBV
• What happens if there's a network failure while fetching the data from the storage? Is the peer library going to retry that automatically? What if it fails continuously? Will I ultimately see a thrown exception out of nowhere?
#2020-10-0200:33marshallYes it will retry. It may eventually time out and/or throw#2020-10-0201:20vnczOk understood. So although the db value is immutable, it might fail to deliver the data in edge cases. That clarifies, thanks a lot!#2020-10-0114:26Sam DeSotaHey all, we just had an issue where ##NaN was transacted into a datomic on-prem db, and a couple weird things happened:
• It was impossible to update the values, unless you manually used db/retract + db/add, just using db/add would not automatically retract ##NaN value
• We also couldn’t search for the ##NaN values with a query
Is this known undefined behavior or a bug that should be reported? Seems like ##NaN values shouldn’t even be allowed to be transacted.#2020-10-0114:26vnczI am also kind of confused of what client I should be using here 🤔#2020-10-0115:08marshallcloud or on-prem ?
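The behavior Sam describes is at least consistent with IEEE-754 NaN semantics, where NaN never compares equal to anything, including itself; that is a classic source of "can't match or retract the stored value" surprises, though whether it is exactly what Datomic does internally isn't confirmed in this thread. A plain-Java illustration of the asymmetry:

```java
public class NanEquality {
    public static void main(String[] args) {
        double nan = Double.NaN;
        // Primitive comparison follows IEEE-754: NaN != NaN.
        System.out.println(nan == nan); // prints false
        // Boxed equality and Double.compare deliberately treat NaN as equal
        // to itself, so NaN can live in maps, sets, and sorted structures.
        System.out.println(Double.valueOf(nan).equals(Double.valueOf(nan))); // prints true
        System.out.println(Double.compare(nan, nan) == 0); // prints true
    }
}
```

Whichever of these two notions of equality a storage layer uses for matching, the mismatch between them makes NaN a value worth validating away before transacting.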
or dev-local?#2020-10-0115:51vnczI have a local Datomic instance running on my computer but I could switch to dev-local if that makes the things easier. I'm more curious about why 3 different libraries#2020-10-0116:10marshallif you're using on-prem you can use the peer library or you can use the peer-server & the client-pro library#2020-10-0116:16vncz@U05120CBV Is there a documentation page that explains a little bit the differences and when to use what?#2020-10-0116:17marshallthe clients vs peer page i linked in the other thread#2020-10-0116:26vnczAh all right, I'll check that out before continuing the conversation. Thanks for the help @U05120CBV#2020-10-0114:28pvillegas12Does somebody know how to increase the number of instances in a production topology? Switching the auto scaling group to 3 for example failed when trying to deploy our ion in Datomic Cloud#2020-10-0114:55Joe LaneHave you investigated query groups @U6Y72LQ4A?#2020-10-0114:56Joe LaneSee https://docs.datomic.com/cloud/operation/scaling.html#2020-10-0114:56Joe LaneYou likely DON'T want to be autoscaling your primary group.#2020-10-0115:07zaneI recall someone saying there’s a library out there with a clojure.spec spec for Datomic queries. Does anyone know where I could find it?#2020-10-0117:32JoshThis library defines a bunch of datomic specs https://github.com/alexanderkiel/datomic-spec/blob/master/src/datomic_spec/core.clj#2020-10-0118:12zaneBrilliant. Cheers!#2020-10-0119:58Lennart BuitNote, this is the on prem dialect, cloud (and using client to access a peer server on prem), has slight variations#2020-10-0119:59Lennart BuitFor example, cloud only allows one shape of :find#2020-10-0120:24ivanaIs there any way to check that entity id is temporary? Does (*instance?* datomic.db.DbId val) work?#2020-10-0121:58faviladepends on context. strings and negative numbers can also possibly be tempids#2020-10-0122:35ivanaHm... 
So, having an entity id, we cannot choose the right way to resolve the entity; we should add a boolean flag for whether it is a tempid or not, and then resolve it in different ways depending on this flag...
If you transact data with tempids, that already can be resolved to entities (through external indexes and more), the tempids can be resolved to already existing entities, yes. Tempids do not have to create new entities, they can be resolved to already existing entities.#2020-10-0121:25ziltiWhat is Datomic's way to achieve this:
[:find ?dbid .
:in $ ?name ?domain
:where
(or [?dbid :company/name ?name]
[?dbid :company/domain ?domain])]
#2020-10-0121:52souenzzo@zilti or-join #2020-10-0217:39zaneIs it possible to pull all attributes, but with a default? I’m imagining something like:
(pull [(* :default :no-value)] …
#2020-10-0217:43kennyI don't see how that could be possible. "*" means all attributes that an entity has so there can't be a default.#2020-10-0217:44kennyi.e., there is no info on what an entity does not have.#2020-10-0217:45zaneLet me try to explain how I would do it in two queries.#2020-10-0217:48zane(let [attributes (d/q '[:find [?a ...]
:where
[?e ?a _]]
db)
pattern (mapv (fn [attribute]
`(~attribute :default :no-value))
attributes)]
(d/q `[:find (pull $ ~pattern ?e)
:in $
:where
[?e _ _]]
db))#2020-10-0217:48zaneSomething along those lines.#2020-10-0221:53kennyDefinitely not something built in. I'd advise against that. What is your use case?#2020-10-0217:45donyormSo I have the following query:
{:query
{:find [?e],
:in [$ ?string-val-0],
:where
[(or-join
[?e]
(and
[?e :exception/message ?message-0]
[(.contains ?message-0 ?string-val-0)])
(and
[?e :exception/message ?explanation-0]
[?explanation-0 :message/explanation ?explanation-val-0]
[(.contains ?message-0 ?string-val-0)]))]},
:args
[#object[compute.datomic_client_memdb.core.LocalDb 0x2a589f86 "#2020-10-0217:45donyormBut I'm getting Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
:db.error/insufficient-binding [?string-val-0] not bound in expression clause: [(.contains ?message-0 ?string-val-0)], and I'm not sure why#2020-10-0217:59zaneIf you want ?string-val-0 to unify with the outer clause you’ll need to include it in the rules-vars vector: (or-join [?e ?string-val-0] …)#2020-10-0217:59zaneAt least I suspect that’s what’s wrong.#2020-10-0217:46donyormIs it not enough to have ?string-val-0 defined in the in list?#2020-10-0222:30nandoI'm trying to figure out how to format db.type/instant values, for display in a UI. Using clojure.java-time as follows:
(:require [java-time :as t])
evaluating
(t/format #inst "2020-09-26T23:08:27.619-00:00")
returns "Sun Sep 27 01:08:27 CEST 2020"
but if I add a custom format
(t/format "dd/MM/yyyy" #inst "2020-09-26T23:08:27.619-00:00")
I get the error
Execution error (ClassCastException) at java-time.format/format (format.clj:50).
java.util.Date cannot be cast to java.time.temporal.TemporalAccessor#2020-10-0222:36nandoAny suggestions for formatting db.type/instant values ?#2020-10-0223:31favilaYou need to coerce the Java.util.date to an instant#2020-10-0223:44nando(t/instant #inst "2020-09-26T23:08:27.619-00:00")
=> #time/instant "2020-09-26T23:08:27.619Z"#2020-10-0223:45nando(t/format "yyyy/MM/dd" (t/instant #inst "2020-09-26T23:08:27.619-00:00"))
=> Execution error (UnsupportedTemporalTypeException) at java.time.Instant/getLong (Instant.java:603).
Unsupported field: YearOfEra#2020-10-0223:47nandothe clojure.java-time docs for t/instant say this function "Creates an Instant" https://cljdoc.org/d/clojure.java-time/clojure.java-time/0.3.2/api/java-time.temporal#instant#2020-10-0300:06nando(t/instant? (.toInstant #inst "2020-09-26T23:08:27.619-00:00")) => true
(t/instant? (t/instant #inst "2020-09-26T23:08:27.619-00:00")) => true#2020-10-0300:23nandoThis works: (t/format "yyyy.MM.dd" (t/zoned-date-time 2015 9 28))
=> "2015.09.28"
but I can't find a way to convert a datomic db.type/instant to a zoned-date-time.#2020-10-0300:36nandoThere must be a more straightforward way to format datomic datetime values.#2020-10-0304:45seancorfield@U078GPYL8
user=> (t/format "dd/MM/yyyy" (t/zoned-date-time #inst "2020-09-26T23:08:27.619-00:00" (t/zone-id "UTC")))
"26/09/2020"
user=>
#2020-10-0304:45seancorfield(or whatever TZ you need there)#2020-10-0304:51seancorfieldAlthough if you're dealing with #inst which I believe are just regular java.util.Date objects, this should work (without clojure.java-time at all):
user=> (let [f (java.text.SimpleDateFormat. "dd/MM/yyyy")]
(.format f #inst "2020-09-26T23:08:27.619-00:00"))
"26/09/2020"
user=> #2020-10-0304:53seancorfieldYup, Datomic docs say it's just a java.util.Date:
:db.type/instant instant in time java.util.Date #inst "2017-09-16T11:43:32.450-00:00"
#2020-10-0310:51nando@U04V70XH6 Thanks very much! I've confirmed that both approaches work as expected with an #inst returned from datomic.#2020-10-0311:37nandoI've learned a lot here, both by dipping my toes into the clojure.java-time and tick libraries, and getting a more practical sense of how java interop works through your example.#2020-10-0312:33favila@U078GPYL8 I meant something like this (sorry, was on a phone earlier):
(let [d #inst"2020-10-03T12:18:02.445-00:00"
f (-> (java.time.format.DateTimeFormatter/ofPattern "dd/MM/yyyy")
(.withZone (java.time.ZoneId/systemDefault)))]
(.format f (.toInstant d)))
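favila's snippet rendered as standalone Java, for reference. The key points it demonstrates: the java.util.Date must be coerced to java.time.Instant first, the formatter needs a zone attached via withZone before a date pattern can be applied to a raw Instant, and DateTimeFormatter is immutable and thread-safe, so one shared instance can serve many threads (unlike java.text.SimpleDateFormat). The UTC zone here is an assumption for reproducibility; favila's original uses the system default zone.

```java
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class InstantFormat {
    // Immutable and thread-safe: safe to define once and share,
    // which is the point of preferring it over SimpleDateFormat.
    static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("dd/MM/yyyy").withZone(ZoneId.of("UTC"));

    static String format(Date d) {
        // Coerce Datomic's java.util.Date instant to java.time.Instant;
        // without .withZone on the formatter, formatting a raw Instant
        // with date fields fails (the UnsupportedTemporalTypeException
        // seen earlier in the thread).
        return FMT.format(d.toInstant());
    }

    public static void main(String[] args) {
        Date d = Date.from(java.time.Instant.parse("2020-10-03T12:18:02.445Z"));
        System.out.println(format(d)); // prints 03/10/2020
    }
}
```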
#2020-10-0312:33favilaI avoid java.text.SimpleDateFormat because it’s the “old” way and it’s not thread-safe#2020-10-0312:35favilaI think what sean posted is nearly the equivalent, except he coerces to a zoned date time instead of specifying the zone in the formatter#2020-10-0312:35favilabut I’m not familiar with clojure.java-time, I just use java.time directly#2020-10-0313:30nando@U09R86PA4 I see what you originally meant evaluating
(let [d #inst"2020-10-03T12:18:02.445-00:00"
f (-> (java.time.format.DateTimeFormatter/ofPattern "dd/MM/yyyy")
(.withZone (java.time.ZoneId/systemDefault)))]
(.format f d))#2020-10-0313:32nandoThe same error is produced without the date being wrapped in a .toInstant call.#2020-10-0313:40nandoIn what type of use case would the fact that SimpleDateFormat is not thread safe produce an unexpected result, particularly in the context of a web application?#2020-10-0314:19favilaDef the format object and then use it in functions running in my tools threads #2020-10-0314:20favila*multiple#2020-10-0314:24favilaThere are a few strata of Java date systems#2020-10-0314:25favilaThe oldest is Java.util.date objects. The newest is Java.time.*, which represents instants as Java.time.Instant objects instead. #2020-10-0314:26favilaThere are some in between that aren’t worth learning anymore#2020-10-0317:55seancorfieldYup, going via Java Time is definitely the safest route and the best set of APIs to learn. At work, over the past decade we've gone from java.util.Date to date-clj (date arithmetic for that old Date type), to clj-time (wrapping Joda Time), to Java Time (with clojure.java-time in some parts of the code and plain interop in a lot of places). Converting java.util.Date to java.time.Instant and doing everything in Java Time is a bit painful/verbose, but you can write utility functions for stuff you need frequently to hide that interop/verbosity.#2020-10-0223:31favilatoInstant method#2020-10-0400:03nandoI'm getting an inconsistent result using the sum aggregate function on dev-local. If I include only the sum function, the result is much less than it should be. If I add the count function to the same query, the result of the sum function is then correct.
Here's the query with only the sum function. There are multiple batch items per batch and I need the total weight of all batch items.
[:find ?e ?formula-name ?doses ?date (sum ?weight)
:keys e formula-name doses date total-weight
:in $ ?e
:where [?e :batch/formula ?fe]
[?fe :formula/name ?formula-name]
[?e :batch/doses ?doses]
[?e :batch/date ?date]
[?bi :batch-item/batch ?e]
[?bi :batch-item/weight ?weight]]
=> :total-weight 1027800#2020-10-0400:12nandoHere's the query with both the sum and count aggregate functions:
[:find ?e ?formula-name ?doses ?date (sum ?weight) (count ?bi)
:keys e formula-name doses date total-weight count
:in $ ?e
:where [?e :batch/formula ?fe]
[?fe :formula/name ?formula-name]
[?e :batch/doses ?doses]
[?e :batch/date ?date]
[?bi :batch-item/batch ?e]
[?bi :batch-item/weight ?weight]]
=> :total-weight 2009250,
:count 45
I've confirmed that 2009250 is the correct amount.
What am I not understanding here?#2020-10-0400:13Joe Lanehttps://docs.datomic.com/cloud/query/query-data-reference.html#with#2020-10-0400:14Joe Lane[:find ?e ?formula-name ?doses ?date (sum ?weight)
:keys e formula-name doses date total-weight
:with ?bi
:in $ ?e
:where [?e :batch/formula ?fe]
[?fe :formula/name ?formula-name]
[?e :batch/doses ?doses]
[?e :batch/date ?date]
[?bi :batch-item/batch ?e]
[?bi :batch-item/weight ?weight]]#2020-10-0400:14Joe LaneWhat does that return?#2020-10-0400:15nandoReading now .... so duplicates are being excluded from the sum?#2020-10-0400:17nandoIt's correct now!#2020-10-0400:18nandoThat's subtle. Thanks @lanejo01#2020-10-0400:19Joe LaneDoes the concept of a set vs a bag make sense to you from the docs?#2020-10-0400:21nandoI understood immediately that duplicates might be excluded from (sum ...) when I saw the example, but that's not what one would expect from a sum function.#2020-10-0400:21nando2 + 2 = 2 ???#2020-10-0400:23nandoSo I think it might be good to point this out in the sum section of the documentation (if it isn't there already)#2020-10-0400:24Joe LaneIt's not related to the sum aggregate though, it's related to whether or not you want a bag vs a set of the ?bi lvar.#2020-10-0400:24Joe LaneIt's a more general concept.#2020-10-0400:24nando;; query
[:find (sum ?count)
:with ?medium
:where [?medium :medium/trackCount ?count]]
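[Editor's note: the set-vs-bag behavior under discussion can be mimicked in plain Clojure without a Datomic database. A sketch with made-up weights (100, 100, 50): :find returns a set of tuples, so two batch items with the same weight collapse into one tuple before (sum ...) runs, while :with ?bi keeps one tuple per batch item:]

```clojure
;; Tuples as produced with ":with ?bi"; one tuple per batch item.
(def with-bi #{[:bi-1 100] [:bi-2 100] [:bi-3 50]})

;; Without :with, only [?weight] remains in each tuple, and set
;; semantics collapse the duplicate 100 before aggregation.
(def without-with (set (map (fn [[_bi w]] [w]) with-bi)))
;; => #{[100] [50]}

(reduce + (map second with-bi))      ;; => 250  (bag semantics, correct)
(reduce + (map first without-with))  ;; => 150  (duplicate weight lost)
```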
I see an example is in there, but I didn't understand the significance.#2020-10-0400:27nandoI understand it doesn't only apply to the sum aggregate. I'm only saying that if it has a non-obvious impact on a specific aggregate function, it might be helpful for beginners like me to point that out.#2020-10-0400:30seancorfieldInteresting. I hadn't learned enough about Datomic to realize it specifically deals in sets by default instead of bags...#2020-10-0400:34nandoIt is still quite vague to me when a query would return a set.#2020-10-0400:36nandoI guess it always has to be kept in mind, because as the example in the documentation on With Clauses shows, it isn't only an issue with some aggregate functions.#2020-10-0401:03nando@lanejo01 Here's a specific suggestion for the docs that might help to make this more clear for beginners. In the subsection on sum where it says
"The following query uses sum to find the total number of tracks on all media in the database."
You might change that to something like
"The following query uses sum to find the total number of tracks on all media in the database. Note carefully the use of the with-clause in the query so that all trackCounts are summed. If the with-clause is excluded, only unique trackCounts will be summed."#2020-10-0511:57onetomwe have upgraded clojure cli tools x.x.x.590 to 1.10.1.697, then the following error appeared:
$ clj -Srepro -e "(require 'datomic.client.api)"
WARNING: When invoking clojure.main, use -M
Execution error (FileNotFoundException) at clojure.core.async.impl.ioc-macros/eval774$loading (ioc_macros.clj:12).
Could not locate clojure/tools/analyzer__init.class, clojure/tools/analyzer.clj or clojure/tools/analyzer.cljc on classpath.
i think im on the latest dependencies in my ./deps.edn file:
org.clojure/clojure {:mvn/version "1.10.1"}
com.datomic/client-cloud {:mvn/version "0.8.102"}
com.datomic/ion {:mvn/version "0.9.48"}
#2020-10-0512:50onetomI tried it with both nixpkgs.jdk8 and jdk11.
I tried it with and without the deps overrides recommended by the latest ion-dev push operation.
the error is always the same.
I have no other dependencies specified and still get this error.
I guess I can specify this missing dependency explicitly, but it feels like I'm doing something wrong if such a bare-bones ion project doesn't work out of the box.#2020-10-0513:10Alex Miller (Clojure team)Can you share your full deps.edn?#2020-10-0513:36onetom{:paths
["src" ;"rsc" "classes"
]
:deps
{
org.clojure/clojure {:mvn/version "1.10.1"}
com.datomic/client-cloud {:mvn/version "0.8.102"}
com.datomic/ion {:mvn/version "0.9.48"}
;org.clojure/data.json {:mvn/version "0.2.6"}
;http-kit/http-kit {:mvn/version "2.5.0"}
;metosin/reitit-ring {:mvn/version "0.5.6"}
;org.clojure/tools.analyzer {:mvn/version "1.0.0"}
;; Deps to avoid conflicts with Datomic Cloud
;; commons-codec/commons-codec #:mvn{:version "1.13"},
;; com.fasterxml.jackson.core/jackson-core #:mvn{:version "2.10.1"},
;; com.amazonaws/aws-java-sdk-core #:mvn{:version "1.11.826"},
;; com.cognitect/transit-clj #:mvn{:version "0.8.319"},
;; com.cognitect/s3-creds #:mvn{:version "0.1.23"},
;; com.amazonaws/aws-java-sdk-kms #:mvn{:version "1.11.826"},
;; com.amazonaws/aws-java-sdk-s3 #:mvn{:version "1.11.826"}
}
:mvn/repos
{"datomic-cloud"
{:url ""}}
:aliases
{:test
{:extra-paths
["test"]
:extra-deps
{nubank/matcher-combinators {:mvn/version "3.1.3"}
lambdaisland/kaocha {:mvn/version "1.0.700"}}}}
}
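[Editor's note: for reference, the workaround Alex Miller arrives at further down in this thread is an explicit top-level org.clojure/core.async dependency. Applied to the deps.edn above, the :deps map would look like this sketch, with versions copied from the thread:]

```clojure
{:deps
 {org.clojure/clojure      {:mvn/version "1.10.1"}
  ;; Workaround for the tools.deps resolution bug in 1.10.1.697:
  ;; pin core.async at the top level so tools.analyzer and friends
  ;; are resolved transitively again.
  org.clojure/core.async   {:mvn/version "0.5.527"}
  com.datomic/client-cloud {:mvn/version "0.8.102"}
  com.datomic/ion          {:mvn/version "0.9.48"}}}
```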
#2020-10-0513:40onetomi tried brew install clojure and run /usr/local/bin/clojure directly; same result.
i haven't tried it under linux yet, but it feels like a tools.deps.alpha issue.#2020-10-0513:43Alex Miller (Clojure team)I don't think the os matters so no reason to do that#2020-10-0513:46onetomi just retried again on a different machine:
no error:
$ /nix/store/0v7kwppxygj3wln9j104vfi1kx21fssj-clojure-1.10.1.590/bin/clojure -Srepro -e "(require 'datomic.client.api)"
analyzer error:
$ /nix/store/9g4xqjpzi7vkr5a5n2q3fd1cyymvh68r-clojure-1.10.1.697/bin/clojure -Srepro -e "(require 'datomic.client.api)"
#2020-10-0513:49onetomthese are the differences in the dependency tree:
$ diff -u <(/nix/store/0v7kwppxygj3wln9j104vfi1kx21fssj-clojure-1.10.1.590/bin/clojure -Srepro -Stree) <(/nix/store/9g4xqjpzi7vkr5a5n2q3fd1cyymvh68r-clojure-1.10.1.697/bin/clojure -Srepro -Stree)
--- /dev/fd/63 2020-10-05 21:48:46.147069426 +0800
+++ /dev/fd/62 2020-10-05 21:48:46.147513695 +0800
@@ -31,10 +31,6 @@
com.datomic/client-api 0.8.54
org.clojure/core.async 0.5.527
org.clojure/tools.analyzer.jvm 0.7.2
- org.clojure/tools.analyzer 0.6.9
- org.clojure/tools.reader 1.0.0-beta4
- org.clojure/core.memoize 0.5.9
- org.ow2.asm/asm-all 4.2
com.cognitect/http-client 0.1.105
org.eclipse.jetty/jetty-http 9.4.27.v20200227
org.eclipse.jetty/jetty-io 9.4.27.v20200227#2020-10-0513:54Alex Miller (Clojure team)I'm looking at it, give me a bit#2020-10-0514:50Alex Miller (Clojure team)this is a tools.deps bug - it's pretty subtle and will take me a bit to isolate and fix properly but adding a top level dep on org.clojure/core.async 0.5.527 should be a sufficient workaround for the moment#2020-10-0514:50onetomthank you!#2020-10-0514:53onetomwith that core.async, it worked on my side too#2020-10-0523:33Alex Miller (Clojure team)hey, a new prerelease of clj is out if you'd like to test it - 1.10.1.708, will promote to stable after a bit more use#2020-10-0523:34Alex Miller (Clojure team)and I guess I implied but should say that it fixes this problem - thanks for the report, it would have been challenging to find this otherwise!#2020-10-0514:58kennytilton@nando I am a Datomic noob myself, but I got curious about the proposed enhanced doc (+1 on that, btw) and how :with might work and ran a little experiment:
(d/q '[:find ?year
:with ?language
:where [?artist :artist/name "Bob Dylan"]
[?release :release/artists ?artist]
[?release :release/year ?year]
[?release :release/language ?language]]
db) ;; => [[1968] [1973] [1969] [1970] [1971]]
So to my unwitting eyes, the :with per se does not block collapsing of duplicates: rather, one must concoct a :with clause based on domain knowledge to force a bag with the desired population over which to aggregate. Maybe? :shrug:#2020-10-0515:11favilaThe columns in the initial set are with+find, (in this case ?year ?language), then aggregation happens, then the :with columns are removed (in this case ?language) leaving a bag#2020-10-0515:11favilamaybe it’s easier to think of it as :find ?year ?language :removing ?language#2020-10-0515:12favilainstead of :find ?year :with ?language#2020-10-0515:25nandoThat's a very helpful explanation.#2020-10-0515:15nando@hiskennyness I can only respond by saying that I think :with is a very important clause to understand, and I'm not sure I fully understand it yet. Your example is the first I've seen targeting an attribute rather than an entity id. I don't have sufficient grasp of the inner workings of datomic or concept behind :with to make a guess how that works, but I'm easily confused.#2020-10-0515:24kennytiltonYou remind me of this gem: https://ace.home.xs4all.nl/Literaria/Poem-Graves.html.
I have been toying with doing a ground-up Datomic for the Easily Confused tutorial series, maybe I should do it as I struggle up my own learning curve.#2020-10-0515:40nandoIf you do a tutorial series, of course please send the link!#2020-10-0516:29timohow do I create a client with datomic.client.api with datomic free?#2020-10-0517:18timocan I even use datomic.client.api with datomic free?#2020-10-0517:21Michael WYes you have to run a peer server with datomic free to do that. See: https://docs.datomic.com/on-prem/peer-server.html#2020-10-0517:21marshall@timok Datomic free does not support peer server#2020-10-0517:21marshall@timok You should look at dev-local: https://docs.datomic.com/cloud/dev-local.html#2020-10-0517:21marshall^ no cost way to use datomic client library locally#2020-10-0517:21Michael WI have it running a peer server here...#2020-10-0517:22marshallDatomic Pro Starter, which is free (no cost) does include peer server#2020-10-0517:22marshallhttps://www.datomic.com/get-datomic.html#2020-10-0517:22Michael WOk so I am running that not free then. Sorry for the confusion.#2020-10-0517:23timoalright, thanks...will try dev-local then#2020-10-0522:38onetomi would like to implement some cognito triggers, using ions.
how can i see what payload an api service calls an ion with?
can i "log" such info to some standard location easily?
for example, where can i see the output if i just clojure.pprint/pprint something in an ion?#2020-10-0523:08marshall@onetom https://docs.datomic.com/cloud/ions/ions-monitoring.html#events#2020-10-0613:42xcenoNot sure if I've run into the same bug as @onetom reported above, or if I don't understand the docs correctly.
I've added both com.datomic/dev-local and com.datomic/client-cloud to my project. Now whenever I try to call (d/client <some-cfg>), it crashes with:
> Syntax error (FileNotFoundException) compiling at (datomic/client/impl/shared.clj:1:1).
> Could not locate cognitect/hmac_authn__init.class, cognitect/hmac_authn.clj or cognitect/hmac_authn.cljc on classpath. Please check that namespaces with dashes use underscores in the Clojure file name
If I only add one or the other dependency it works fine. So I can either connect locally or to datomic-cloud, but as I understand we should be able to add both as a dependency at the same time and then either construct one or the other client, or call divert(?)
I'm still on clj 1.10.1.536
You can replicate the behaviour by simply checking out the ion-starter project and adding dev-local as a dependency. Leave everything else unchanged and try to create a client.#2020-10-0613:50xcenoOh, I just found this post in the forums: https://forum.datomic.com/t/dev-and-test-locally-with-dev-local/1518/9
So, nevermind. I'll try the latest clj then#2020-10-0613:52Alex Miller (Clojure team)if you do upgrade to latest stable clj (1.10.1.697) you are likely to run into the issue that @onetom was seeing (these issues are related) so you might actually need to go to the prerelease (1.10.1.708) or wait for that to be promoted to stable, should be soon#2020-10-0614:01xcenoNice thanks!
I was just going to ask if I can install the pre-release via homebrew for linux, but I can also use that script for now#2020-10-0614:02Alex Miller (Clojure team)prereleases are not in brew (or they'd be releases) but you can just follow the instructions at https://clojure.org/guides/getting_started but with that version number on linux#2020-10-0614:10Alex Miller (Clojure team)just fyi, if you are posting large text, using a snippet (the lightning bolt in the bottom left of the edit pane) will fold it and syntax highlight it#2020-10-0614:29onetomI'm posting from mobile and that lightning icon brings some search dialog up, but thx for the feedback; I will try to figure this out#2020-10-0614:30onetombtw, what's the license of the Datomic CLI?
I haven't found any mention of that in the ZIP file#2020-10-0614:58Alex Miller (Clojure team)you should ask in main channel, don't know#2020-10-0614:40onetomNix package for the Datomic Cloud CLI Tools#2020-10-0614:42onetomAdaptation of the official Clojure CLI Nix package to the latest version: 1.10.1.708#2020-10-0621:47zhuxun2Is there a way to subscribe to entity changes in Datomic?#2020-10-0621:48Lennart BuitWould the tx-report-queue work ^^: https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/tx-report-queue ?#2020-10-0621:54zhuxun2@UDF11HLKC Looks like it's what I'm looking for, thanks!#2020-10-0622:18schmeeheads up: tx-report-queue does not exist in Datomic Cloud#2020-10-0701:27mruzekwIs there an alternative for this? ^#2020-10-0700:08cjmurphyIs there a library out there such that whether using :peer or :client is mostly unknown from your application code's point of view?#2020-10-0706:32Lennart BuitYou can use the client library to connect to an on-prem peer server. The client library is the lowest common denominator in that sense ^^#2020-10-0708:42cjmurphyYes, in a sense the peer library is on the way out?? I was asking because I use such a peer/client ambivalent library internally, and was thinking to make it open source.#2020-10-0709:01Lennart BuitWell I’m not at cognitect, so I can’t answer that. But if you intend to migrate from on-prem to the cloud at some point, you are better off with the client api, so it appears to be ‘good advice’ to start new projects with the client api#2020-10-0709:06Lennart BuitAlso; there are quite a few subtle differences between the api’s. The query dialect is slightly different (most notably, not all :find specs in peer are supported in client), and many functions have slightly different arguments/return values (for example, many functions return deref’able things in peer, but direct results in client).#2020-10-0714:01cjmurphyI must have ignored the 'good advice' initially, hence the compatibility library. 
For the :find differences I've just changed all the queries to work for the client, which means they can work for both. Other subtleties I've found have just been taken care of by the library - basically it is a map that includes some wrapper functions that choose to use one function or the other - for example either one of the two variants of transact , depending on the 'mode' being used.#2020-10-0712:07xcenoCan't I build/push a datomic ion that includes an alias? Meaning: I have an application and bundle datomic-ion specific stuff under a :datomic alias. I then want to push it like this: clojure -M:datomic:ion-dev '{:op :push}', but when I check the zip file that's generated all the stuff from my alias (specific namespaces & resources) is missing. Am I doing something wrong or is this just not supported?#2020-10-0713:54marshall@rob703 I believe you want -A:aliases#2020-10-0713:54marshallnot -M#2020-10-0713:58Alex Miller (Clojure team)not with new clj ...#2020-10-0713:59xcenoI actually tried both and various combinations thereof yesterday. My initial cmd was clojure -A:datomic -M:ion-dev ... and I also tried the variant straight from the docs: clojure -A:datomic:ion-dev -m datomic.ion.dev '{:op :push (options)}' to no avail. But I can double check again.
On another note:
Right now, I've pulled all my deps from the alias into my main deps so I can at least try the deployment of my lambda proxy.
Now I'm a step further, I see in the Api Gateway console that my app is returning a proper ring response, but the lambda crashes with a 502 error:
> Wed Oct 07 13:51:50 UTC 2020 : Execution failed due to configuration error: Malformed Lambda proxy response
> Wed Oct 07 13:51:50 UTC 2020 : Method completed with status: 502
The only thing I don't see in my response body is the isBase64Encoded flag, so maybe that's the issue right now#2020-10-0714:49Alex Miller (Clojure team)which doc was clojure -A:datomic:ion-dev -m datomic.ion.dev '{:op :push (options)}' from ?#2020-10-0715:06xcenoFrom what I've seen so far it's every command in the ion tutorials, e.g. https://docs.datomic.com/cloud/ions/ions-reference.html#push#2020-10-0715:08xcenoThe ion tutorials would also need some updates regarding the new AWS Api-Gateway Options, see here: https://clojurians.slack.com/archives/C03RZMDSH/p1601380150089100#2020-10-0715:09Alex Miller (Clojure team)thanks those commands seem wrong - if using :ion-dev with a :main-opts, the -m datomic.ion.dev isn't need there /cc @U05120CBV#2020-10-0715:46marshallI’ve updated the commands in the reference doc. Thanks @rob703#2020-10-0713:59xcenoAh yeah and I updated to the latest clj yesterday#2020-10-0713:59xcenoso that's why I converted to -M#2020-10-0714:09Alex Miller (Clojure team)yes, that clj syntax is fine with the new clj (but I don't think that has anything to do with your issue)#2020-10-0714:09vnczHey, how do I create a new database in Datomic-Local?#2020-10-0714:11marshallIf you mean dev-local, once you’ve made a client (https://docs.datomic.com/cloud/dev-local.html#using) you can use it exactly the same way as you would using client against cloud (i.e. you can call create-database https://docs.datomic.com/client-api/datomic.client.api.html#var-create-database)#2020-10-0714:12vnczI do recall creating the database from the cmd line argument when using Datomic on premise locally on my computer#2020-10-0714:12vnczMy memory might be flaking though#2020-10-0714:40marshallgenerally that would not be the case unless you were just using peer-server with a mem database#2020-10-0714:41vnczTotally my memory flaking then#2020-10-0714:10marshall@rob703 what versions of the various ion tools are you using? 
I’m going to try to reproduce/investigate#2020-10-0714:13xcenoThank you!
This is part of my deps edn:#2020-10-0714:13xcenoSo initially, all those deps where under my :datomic alias#2020-10-0714:19xcenoOh and I'm using the latest ion-dev tools as an alias in my user config#2020-10-0714:19xcenobasically just following the tutorial#2020-10-0714:39marshall@rob703 the zip file that is created will not contain all of your deps themselves#2020-10-0714:39marshallit only contains your ion code
The deps are fetched when you deploy#2020-10-0714:39marshallwere you seeing a problem with your actual deploy?#2020-10-0714:41xceno> it only contains your ion code
Yes, that's the other part of my problem, it's not only the deps but also the additional paths.
For example:
:aliases {:datomic {:extra-paths ["src/datomic"]}}
My entire code in the datomic folder is missing#2020-10-0714:50marshallyep, I’ve reproduced that behavior. looking into it further now#2020-10-0714:53xcenoThank you!#2020-10-0714:54marshallfor now I would say you’ll want to put those extra paths in the main body of the deps, not in an alias#2020-10-0714:55xcenoYeah I moved everything from my alias up for now.
I'm now battling with AWS itself, trying to get the lambda proxy to work. But that's another issue in itself#2020-10-0723:14m0smithIs there a clear example of using :db.entity/preds? I have defined it as {:db/ident :transaction/pred
:db.entity/preds 'ledger.transaction/existing-transaction-entity-pred}#2020-10-0723:15m0smithWhen I try and transact with (d/transact conn {:tx-data [{:transaction/user-id #uuid "9550f401-fb16-4e42-8940-d683dbad3a3d" :transaction/txn-hash "Pl3b9f7ba2-eb0d-412d-b305-f76b5150c711" :db/ensure :transaction/pred}]})#2020-10-0723:16m0smithI get Execution error (IndexOutOfBoundsException) at datomic.core.datalog/bound-consts$fn (datalog.clj:1570).#2020-10-0723:16m0smithAny hints?#2020-10-0723:19m0smithAfter taking a closer look at the stack trace, the predicate is being called but erroring#2020-10-0801:21ziltiIs it a known bug that when there's a bunch of datums that get transacted simultaneously, it can randomly cause a :db.error/tempid-not-an-entity tempid '17503138' used only as value in transaction error?#2020-10-0801:36favilaThe meaning of this error is that the string “17503138” is used as a tempid that is the value of an assertion, but there is no place where the tempid is used as the entityid of an assertion; the latter is necessary for datomic to decide whether to mint a new entity id or resolve it to an existing one#2020-10-0801:37ziltiWell, as you can see in the actual datums I posted, it clearly is being used as :db/id.#2020-10-0801:38ziltiI had my program dump all datums into a file before transacting, and I copied the two that refer to this string over into here#2020-10-0801:38favilaIn your example, I see the second item says :account/accounts “17503138”. Are both these maps together in the same transaction?#2020-10-0801:39favila(Btw a map is not a datum but syntax sugar for many assertions—it’s a bit confusing to call it that)#2020-10-0801:40ziltiYes, they are both together in the same transaction.
True, I mixed up the terminology... Entity would be more fitting#2020-10-0801:43favilaIf they are indeed both in the same tx I would call that a bug. Can you reproduce?#2020-10-0801:44favilaWhy is each map in its own vector?#2020-10-0801:44ziltiYes, reliably, every time with the same dataset. Both locally with a dev database as well as on our staging server using PostgreSQL.#2020-10-0801:45ziltiConformity wants it that way, for some reason#2020-10-0801:45favilaConformity for data?#2020-10-0801:45ziltiI had that same issue a while back in a normal transaction without conformity as well though#2020-10-0801:45favilaSeparate vectors in conformity means separate transactions...#2020-10-0801:45ziltiThe migration library called conformity#2020-10-0801:48favilaI’ve only ever used conformity for schema migrations; using it for data seems novel; but I’m suspicious that these are really not in the same transaction#2020-10-0801:49favilaSee if you can get it to dump the full transaction that fails and make sure both maps mentioning that tempid are in the same transaction#2020-10-0801:22ziltiIt is often caused by one single entry that is the same structure as many others. Everything is fine, but for some reason, Datomic doesn't like it. Removing that one entry solves the problem.#2020-10-0812:55marshallwhy are both of those entity maps in separate vectors?
If you’re adding them with d/transact , all of the entity maps and/or datoms passed under the :tx-data key need to be in the same collection#2020-10-0812:56marshallbased on the problem you described, I would expect that error if you transacted the first of those, and then tried the second of those in a separate transaction#2020-10-0812:56marshallif they’re asserted in the same single transaction it should be fine#2020-10-0801:23ziltiOrdering of the entries in the transaction vector doesn't seem to matter either#2020-10-0801:26ziltiThe two datums causing problems:
[{:account/photo
"REDACTED",
:account/first-name "REDACTED",
:account/bio
"REDACTED",
:account/email-verified? false,
:account/location 2643743,
:account/vendor-skills [17592186045491],
:account/id #uuid "dd33747e-5c13-4779-8c23-9042460eb3f3",
:account/vendor-industry-experiences [],
:account/languages [17592186045618 17592186045620],
:account/vendor-specialism 17592186045640,
:account/links
[{:db/id "REDACTED",
:link/id #uuid "ea51184c-d027-44d0-8f20-df222e58daf3",
:link/type :link-type/twitter,
:link/url "REDACTED"}
{:db/id
"REDACTED",
:link/id #uuid "c9577ca4-332d-41f0-b617-c00e89fc94b4",
:link/type :link-type/linkedin,
:link/url
"REDACTED"}],
:account/last-name "REDACTED",
:account/email "REDACTED",
:account/vendor-geo-expertises
[17592186045655 17592186045740 17592186045648],
:db/id "17503138",
:account/vendor-type 17592186045484,
:account/roles [:account.role/vendor-admin],
:account/job-title "Investor"}]
and
[{:account/primary-account "17503138",
:company/headline "REDACTED",
:account/accounts ["17503138"],
:tenant/tenants [[:tenant/name "REDACTED"]],
:company/name "REDACTED",
:company/types [:company.type/contact],
:db/id "REDACTED",
:company/id #uuid "ee26b11f-53ba-43f9-a59b-f7ad1a408d41",
:company/domain "REDACTED"}]#2020-10-0809:02Adrian SmithDuring a meetup recording that I haven't uploaded yet, I recorded my own maven private token from https://cognitect.com/dev-tools/view-creds.html. Is there a way I can regenerate that token?#2020-10-0813:10marshallCan you send an email to [email protected] and we will help with this?#2020-10-0821:14Adrian Smiththank you, I've just sent an email over#2020-10-0811:38BlackHey, I'm just missing something and can't figure out what. I am calling a tx on datomic:
(defn add-source [conn {:keys [id name]
                        :or   {id (d/squuid)}}]
  (let [tx {;; Source initial state
            :db/id               (d/tempid :db.part/user)
            :source/id           id
            :source/storage-type :source.storage-type/disk
            :source/job-status   :source.job-status/dispatched
            :source/created      (java.util.Date.)
            :source/name         name}]
    @(d/transact conn [tx])))
;; and then later API will call
(add-source conn entity-data)
After I call add-source, an entity is created, but after another call the old entity is overwritten; only if I call transact with multiple transactions can I create multiple entities. I am new to datomic and can't find any resources about this. Can anyone help?#2020-10-0812:19favilatempids resolve to existing entities if you assert a :db.unique/identity attribute value on them that already exists. Are any of these attributes :db.unique/identity? Are you sure you are not supplying an id argument to your function?#2020-10-0812:20favila(btw I would separate transaction data creation into a separate function so it's easier to inspect)#2020-10-0812:23Black{:db/doc "Source ID"
:db/ident :source/id
:db/valueType :db.type/uuid
:db/cardinality :db.cardinality/one
:db/id #db/id [:db.part/db]
:db.install/_attribute :db.part/db}#2020-10-0812:23Blackthis is the schema for source/id; I am not using :db.unique/identity#2020-10-0812:24BlackAnd I agree with separating tx creation, but first I would like to get it to work#2020-10-0812:25BlackIf I remove :db/id from the transaction, I should still be able to create a new entity, right? But every time the first one is rewritten#2020-10-0812:26favilacan you give a clearer get/expect case? maybe a repl console?#2020-10-0812:28favilasomething that shows you calling add-source twice with the returned tx data, and pointing out what you think is wrong with the result of the second call?#2020-10-0812:42BlackOk I had unique on another parameter:
{:db/doc "Source name"
:db/ident :source/name
:db/unique :db.unique/identity
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/id #db/id [:db.part/db]
:db.install/_attribute :db.part/db}
If I remove it, all entities are created and it works as I expected. So I will read more about unique attributes. Thanks @U09R86PA4, I would not have noticed it without your help!#2020-10-0814:45ziltiWell, I guess I am going to do my migrations using a home-made solution now. I just lost all trust in Conformity. It doesn't write anything to the database most of the time, I've noticed.#2020-10-0814:46ziltiOr are there alternatives?#2020-10-0814:50ghadican you describe your problem with conformity in more detail?#2020-10-0815:21ziltiI have a migration that is in a function. Conformity runs the function normally, but instead of transacting the data returned from it, it just discards it. The data is definitely valid; I made my migration so it also dumps the data into a file. I can load that file as EDN and transact it to the db using d/transact perfectly fine.#2020-10-0815:23ziltiConformity doesn't even give an error, it just silently discards it.#2020-10-0815:28ghadiis this cloud or on prem?#2020-10-0815:30ziltiOn prem, both for the dev backend and the postgresql one#2020-10-0815:33ghadinot sure what to tell you. you need to analyze this further before throwing up your hands#2020-10-0815:37favilaConformity does bookkeeping to decide whether a "conform" was already run on that database. If you're running the same key name against the same database a second time, it won't run again.
Is that what you are doing?#2020-10-0815:38favilaConformity is really for schema management, not data imports#2020-10-0815:40ziltiNo, that is not what I am doing.#2020-10-0815:41ziltiWell, the transaction is changing the schema, and then transforming the data that is in there.#2020-10-0815:41ziltiOr at least, that is what it is supposed to be doing.#2020-10-0815:42ghadihttps://github.com/avescodes/conformity#norms-versioning#2020-10-0815:42favilahttps://github.com/avescodes/conformity#norms-versioning#2020-10-0815:42ghadijinx#2020-10-0815:42favilajinx#2020-10-0815:42favilaWe’re pointing out a case where it may evaluate the function but not transact#2020-10-0815:44favilayou can use conforms-to? to test whether conformity thinks the db already has the norm you are trying to transact#2020-10-0815:44favilathat may help you debug#2020-10-0815:47ziltiWell, what is the second argument to conforms-to? ? It's neither the file name nor the output of c/read-resource#2020-10-0815:49ziltiIt wants a keyword, but what keyword?#2020-10-0815:55favilathe keyword in the conform map#2020-10-0815:56favila{:name-of-norm {:txes [[…]] :requires […] :tx-fn …}}#2020-10-0815:56favilathe :name-of-norm part#2020-10-0815:57favilathat’s the “norm”#2020-10-0815:09Filipe Silvaheya, coming here for a question about datomic cloud. I've noticed that while developing on a repl, I get exceptions as described in the datomic.api.client api:
All errors are reported via ex-info exceptions, with map contents
as specified by cognitect.anomalies.
But on the live system, these exceptions don't seem to be ex-info exceptions, just normal errors. At any rate, ex-data returns nil for them. Does anyone know if this is intended? I couldn't find information about this differing behaviour.
A good example of these exceptions is malformed queries for q . On the repl, connected via the datomic binary, I get this return from ex-data
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Query is referencing unbound variables: #{?string}", :variables #{?string}, :db/error :db.error/unbound-query-variables, :dbs [{:database-id "48e8dd4d-84bb-4216-a9d7-4b4d17867050", :t 97901, :next-t 97902, :history false}]}
But on the live system, I get nil.#2020-10-0815:09marshall@filipematossilva are you using the same API (sync or async) in both cases?#2020-10-0815:11Filipe Silvathink so, yeah#2020-10-0815:11Filipe Silvahave a ion handling http requests directly, and the repl is calling the handler that's registered on the ion#2020-10-0815:11Filipe Silvaso it should be the same code running#2020-10-0815:12Filipe Silvawe can see on the aws logs that the error is of a different shape#2020-10-0815:12Filipe Silvalet me dig it up#2020-10-0815:13Filipe Silvaon the aws logs, logging the exception, shows this#2020-10-0815:13Filipe Silva{
"Msg": "Alpha API Failed",
"Ex": {
"Via": [
{
"Type": "com.google.common.util.concurrent.UncheckedExecutionException",
"Message": "clojure.lang.ExceptionInfo: :db.error/not-a-binding-form Invalid binding form: :entity/graph {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message \"Invalid binding form: :entity/graph\", :db/error :db.error/not-a-binding-form}",
"At": [
"com.google.common.cache.LocalCache$Segment",
"get",
"LocalCache.java",
2051
]
},
{
"Type": "clojure.lang.ExceptionInfo",
"Message": ":db.error/not-a-binding-form Invalid binding form: :entity/graph",
"Data": {
"CognitectAnomaliesCategory": "CognitectAnomaliesIncorrect",
"CognitectAnomaliesMessage": "Invalid binding form: :entity/graph",
"DbError": "DbErrorNotABindingForm"
},
"At": [
"datomic.core.error$raise",
"invokeStatic",
"error.clj",
55
]
}
],#2020-10-0815:13Filipe Silva(note: this was not the same unbound var query as above)#2020-10-0815:14Filipe Silvaprinting the error on the repl, we see this instead
#error {
:cause "Invalid binding form: :entity/graph"
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form, :dbs [{:database-id "48e8dd4d-84bb-4216-a9d7-4b4d17867050", :t 97058, :next-t 97059, :history false}]}
:via
[{:type clojure.lang.ExceptionInfo
:message "Invalid binding form: :entity/graph"
:data {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form, :dbs [{:database-id "48e8dd4d-84bb-4216-a9d7-4b4d17867050", :t 97058, :next-t 97059, :history false}]}
:at [datomic.client.api.async$ares invokeStatic "async.clj" 58]}]#2020-10-0815:14marshallthat ^ is an anomaly#2020-10-0815:14marshallwhich is a data map#2020-10-0815:15Filipe Silvamore precisely, (ex-data e) returns the anomaly inside that exception#2020-10-0815:16marshallah, instead of ex-info ?#2020-10-0815:17Filipe SilvaI imagine the datomic client wraps the exception doing something like (ex-info e anomaly cause)#2020-10-0815:18Filipe Silvawe're not wrapping it on our end, just calling ex-data over it to get the anomaly#2020-10-0815:18Filipe Silvabut on the live system, ex-data over the exception returns nil#2020-10-0815:18Filipe Silvawhich I think means it wasn't created with ex-info#2020-10-0815:20Filipe SilvaI mean, I wouldn't be surprised if this is indeed intended to not leak information on the live system#2020-10-0815:20Filipe Silvathat anomaly contains database ids, time info, and history info#2020-10-0815:21Filipe Silvajust wanted to make sure if it was intended or not before working around it#2020-10-0815:29ghadi@filipematossilva are you saying that you are not able to get a :cognitect.anomalies/incorrect from your failing query on the client side?#2020-10-0815:34Filipe Silvaif by client side you mean "what calls the live datomic cloud system", then yes, that's it#2020-10-0815:35ghadi@filipematossilva so what's different about your "live system" vs. the repl?#2020-10-0815:35ghadiclearly it's an ex-info at the repl#2020-10-0815:36Filipe SilvaI really don't know, that's what prompted this question#2020-10-0815:36ghadiperhaps print (class e) and (supers e) in your live system when you get the error#2020-10-0815:36ghadior (Throwable->map e)#2020-10-0815:36ghadisync api or async api?#2020-10-0815:37Filipe Silvasync#2020-10-0815:38Filipe Silvaregarding printing the error#2020-10-0815:39Filipe SilvaI'm printing the exception proper like this:
(cast/alert {:msg "Alpha API Failed"
:ex e})#2020-10-0815:39ghadido you have wrappers/helpers around your query? running it in a future?#2020-10-0815:39Filipe Silvaon the live system the cast prints this#2020-10-0815:39Filipe Silvahttps://clojurians.slack.com/archives/C03RZMDSH/p1602169996347800#2020-10-0815:39ghadioh, yeah that's a com.google.common.util.concurrent.UncheckedExecutionException
at the outermost layer#2020-10-0815:40ghadithen the inner exception is an ex-info#2020-10-0815:40Filipe Silvaon the repl, when cast is redirected to stderr, the datomic binary shows this#2020-10-0815:40ghadithanks. @marshall ^#2020-10-0815:40Filipe Silva#2020-10-0815:44Filipe Silvajust realized that the logged response there on the live system wasn't complete, let me fetch the full thing#2020-10-0815:46Filipe Silvaok this is the full casted thing on aws logs#2020-10-0815:46Filipe Silva#2020-10-0815:47ghadiunderstood#2020-10-0815:49Filipe Silvanow that I look at the full cast on life, I can definitely see the cause and data fields there#2020-10-0815:50Filipe Silvawhich leaves me extra confused 😐#2020-10-0815:50ghadilet me clarify:#2020-10-0815:51ghadiin your REPL, you are getting an exception that is:
* clojure.lang.ExceptionInfo + anomaly data
in your live system you are getting:
* com.google.common.util.concurrent.UncheckedExecutionException
* clojure.lang.ExceptionInfo + anomaly data#2020-10-0815:52ghadiwhere the Ion has the ex-info as the cause (chained to the UEE)#2020-10-0815:52ghadimake sense? seems like a bug @marshall#2020-10-0815:53ghadito work around temporarily, you can do (-> e ex-cause ex-data) to unwrap the outer layer#2020-10-0815:53ghadiand access the data#2020-10-0815:53Filipe SilvaI can see that via indeed shows different things, as you say#2020-10-0815:54Filipe Silvabut the toplevel still shows data and cause for both situations#2020-10-0815:55Filipe SilvaI imagine that data would be returned from ex-data#2020-10-0815:56Filipe Silvalet me edit those code blocks to remove the trace, I think it's adding a lot of noise and not helping#2020-10-0815:57Filipe Silvadone#2020-10-0815:59Alex Miller (Clojure team)I think it's important to separate the exception object chain from the data that represents it (which may pull data from the root exception, not from the top exception)#2020-10-0816:00Alex Miller (Clojure team)Throwable->map for example pulls :cause, :data, :via from the root exception (deepest in the chain)#2020-10-0816:02Filipe Silva@alexmiller it's not clear to me what you mean by that in the current context#2020-10-0816:03Filipe Silva(besides the factual observation)#2020-10-0816:04Filipe Silvais it that you also think that the different behaviour between the repl+datomic binary and live system should be overcome by calling Throwable->map prior to extracting the data via ex-data?#2020-10-0816:05ghadiroot exception is the wrapped ex-info#2020-10-0816:06ghadiyou could do (-> e Throwable->map :data) to get at the :incorrect piece#2020-10-0816:06Alex Miller (Clojure team)I’m just saying that the data you’re seeing is consistent with what Ghadi is saying#2020-10-0816:06Alex Miller (Clojure team)Even though that may be confusing#2020-10-0816:07Filipe Silvaok I think I understand what you mean now#2020-10-0816:07Filipe Silvathank you for explaining#2020-10-0816:07ghadibut the 
inconsistency is a bug 🙂#2020-10-0816:19Filipe Silvacurrently deploying your workaround, and testing#2020-10-0816:20marshall@filipematossilva this is in an Ion correct?#2020-10-0816:49Filipe Silvathe workaround is fine enough for me, but maybe you'd like more information about this?#2020-10-0817:40marshallnope, that’s enough thanks; we’ll investigate#2020-10-0820:28marshallI’ve reproduced this behavior and will report it to the dev team#2020-10-0816:34Filipe Silva@marshall correct#2020-10-0816:35Filipe Silvain a handler-fn for http-direct#2020-10-0816:36Filipe Silva@ghadi I replaced my (ex-data e) with this fn
(defn error->error-data [e]
;; Workaround for a difference in the live datomic system where clojure exceptions
;; are wrapped in a com.google.common.util.concurrent.UncheckedExecutionException.
;; To get the ex-data on live, we must convert it to a map and access :data directly.
(or (ex-data e)
(-> e Throwable->map :data)))#2020-10-0816:36Filipe SilvaI can confirm this gets me the anomaly for the live system#2020-10-0816:37Filipe Silvaslightly different than on the repl still#2020-10-0816:37Filipe Silvalive:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form}
repl:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Invalid binding form: :entity/graph", :db/error :db.error/not-a-binding-form, :dbs [{:database-id "48e8dd4d-84bb-4216-a9d7-4b4d17867050", :t 97901, :next-t 97902, :history false}]}#2020-10-0816:38Filipe Silvawhich makes sense, because in the live exception the :dbs property just isn't there#2020-10-0816:38Filipe Silvabut tbh that's the one that really shouldn't be exposed#2020-10-0816:38Filipe Silvaso that's fine enough for me#2020-10-0816:38Filipe Silvathank you#2020-10-0816:41Nassinis there an official method to move data from dev-local to cloud?#2020-10-0819:54ChicãoDoes anyone know how to get the t from a tx (d/tx->t tx)? My tx is a map and I get an error in the conversion:
{:db-before
java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to java.lang.Number#2020-10-0819:59csmYou need to grab the tx from a datom in :tx-data , in your case 13194139534369. I think something like (-> result :tx-data first :tx) will give you it#2020-10-0820:03csmI think also (-> result :db-after :basisT) will give you your new t directly#2020-10-0820:09Chicãothks#2020-10-0823:07steveb8nQ: I want to store 3rd party oauth tokens in Datomic. Storing them as cleartext is not secure enough so I plan to use KMS to symmetrically encrypt them before storage. Has anyone done something like this before? If so, any advice? Or is there an alternative you would recommend?#2020-10-0823:11steveb8nOne alternative I am considering is DynamoDB#2020-10-0823:11ghadihow many oauth keys? how often they come in/change/expire?#2020-10-0823:13steveb8nI provide a multi-tenant SAAS so at least 1 set per tenant#2020-10-0823:14steveb8nAlso looking at AWS Secrets Manager for this. Clearly I’m in the discovery phase 🙂#2020-10-0823:14steveb8nbut appreciate any advice#2020-10-0823:29ghadiinteraction patterns within KMS are not supposed to be for encryption/decryption of fine granularity items#2020-10-0823:30ghadiusually you generate key material known as a "DEK" (Data Encryption Key) using KMS#2020-10-0823:30ghadithen you use the DEK to encrypt/decrypt a bunch of data#2020-10-0823:30steveb8nok. 
I can see I’m going down the wrong path with Datomic for this data#2020-10-0823:31ghadithat's not the conclusion for me#2020-10-0823:31steveb8nit looks like Secrets Manager with a local/client cache is the way to go#2020-10-0823:31ghadiyou talk to KMS when you want to encrypt/decrypt the DEK#2020-10-0823:31ghadiso when you boot up, you ask KMS to decrypt the DEK, then you use the DEK to decrypt fine-grained things in the application#2020-10-0823:32ghadiwhere to store it (Datomic / wherever) is orthogonal to how you manage keys#2020-10-0823:32ghadiif you talk to KMS every time you want to decrypt a token, you'll pay a fortune and add a ton of latency#2020-10-0823:33ghadithe oauth ciphertexts could very well be in datomic#2020-10-0823:33steveb8nif I am weighing pros/cons of DEK/Datomic vs Secrets Manager, what are the advantages of using Datomic?#2020-10-0823:34ghadisecrets manager is for service level secrets#2020-10-0823:34steveb8nit seems like the same design i.e. cached DEK to read/write from Datomic#2020-10-0823:34ghadiyou could store your DEK in Secrets manager#2020-10-0823:34steveb8nthe downside would be no excision c.f. Secrets Manager#2020-10-0823:34ghadiyou cannot put thousands of oauth tokens in secrets manager#2020-10-0823:35steveb8nexcision is desirable for this kind of data#2020-10-0823:35ghadiwell, depending on how rich you are#2020-10-0823:35steveb8nI’m not rolling in money 🙂#2020-10-0823:35ghadiif you need to excise, you can throw away a DEK#2020-10-0823:35steveb8nhmm. is 1 DEK per tenant practical?#2020-10-0823:36ghadiI would google keystretching, HMAC, hierarchical keys#2020-10-0823:36steveb8nseems like same scale problem#2020-10-0823:36ghadiyou can have a root DEK, then create per tenant DEKs using HMAC#2020-10-0823:36ghadideterministically#2020-10-0823:36steveb8nok. that’s an interesting idea.
a mini DEK chain#2020-10-0823:37ghaditenantDEK = HMAC(rootDEK, tenantID)#2020-10-0823:37steveb8nthen the root is stored in Secrets Manager#2020-10-0823:37ghadiright#2020-10-0823:37steveb8nwhere would the tenant DEKs be stored?#2020-10-0823:37ghadineed to store an identifier so that you can rotate the DEK periodically#2020-10-0823:37ghadiyou don't store the tenant DEKs#2020-10-0823:37ghadiyou derive them on the fly with HMAC#2020-10-0823:38steveb8nok. I’ll start reading up on this. thank you!#2020-10-0823:38ghadisure. with HMAC you'll have to figure out a different excision scheme#2020-10-0823:38ghadiyou could throw away the ciphertext instead of the DEK#2020-10-0823:38ghadibecause you can't throw away the DEK (you can re-gen it!)#2020-10-0823:38ghadietc.#2020-10-0823:39ghadibut yeah db storage isn't your issue :)#2020-10-0823:39ghadikey mgmt is#2020-10-0823:39steveb8ninteresting. that means Datomic is no good for this i.e. no excision#2020-10-0823:39steveb8nor am I missing a step?#2020-10-0823:39ghadiare you using cloud or onprem?#2020-10-0823:40steveb8ncloud / prod topo#2020-10-0823:40ghadistay tuned#2020-10-0823:40steveb8nnow that’s just not fair 🙂#2020-10-0823:41steveb8nI will indeed#2020-10-0823:41ghadihow often does a tenant's 3p oauth token change?#2020-10-0823:41steveb8nIt’s a Salesforce OAuth so the refresh period is configurable I believe. would need to check#2020-10-0823:42steveb8ni.e. enterprise SAAS is why good design matters here#2020-10-0823:42steveb8nI’ll need to build a v1 of this in the coming weeks#2020-10-0823:50steveb8nnow that I think about it, I could deliver an interim solution without this for a couple of months and “stay tuned” for a better solution#2020-10-0823:50steveb8nI’ll hammock this…#2020-10-0823:50steveb8n🙏#2020-10-0911:52ziltiIs there a way to query for entities that don't have a certain attribute?
Something like "show me all entities that have a :company/id but don't have a :company/owner"#2020-10-0911:53manutter51Check out missing? in the query docs#2020-10-0913:59souenzzo@zilti you can also do
[?e .....]
(not [?e :attr])
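[Editor's note] A sketch of the missing? approach suggested above, using the hypothetical :company attributes from the question (untested):

```clojure
;; Find entities that have a :company/id but no :company/owner.
;; missing? is Datomic's built-in predicate for exactly this case.
(d/q '[:find ?e
       :where
       [?e :company/id]
       [(missing? $ ?e :company/owner)]]
     db)
```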
#2020-10-0919:33souenzzoLooks like datomic-peer does not respect the SOCKS proxy JVM props -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=5000
Is it a known issue? slurp respects these settings, both for DNS resolution and packages.
Datomic does not respect the proxy for name resolution
I can't know about packages#2020-10-0920:33ChicãoHi, I want to restore my backup db, so I ran bin/transactor:
datomic-pro-0.9.5561 bin/transactor config/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:>, storing data in: data ...
System started datomic:>, storing data in: data
and I ran this command and I got an error
datomic-pro-0.9.5561 bin/datomic restore-db backup.tgz datomic:
java.lang.IllegalArgumentException: :storage/invalid-uri Unsupported protocol:
at datomic.error$arg.invokeStatic(error.clj:57)
at datomic.error$arg.invoke(error.clj:52)
at datomic.error$arg.invokeStatic(error.clj:55)
at datomic.error$arg.invoke(error.clj:52)
at datomic.backup$fn__19707.invokeStatic(backup.clj:306)
at datomic.backup$fn__19707.invoke(backup.clj:304)
at clojure.lang.MultiFn.invoke(MultiFn.java:233)
Can someone help me?#2020-10-0920:34marshallyour backup (source) needs to be an unzipped backup, not a tar#2020-10-0920:35ChicãoI got the same error when I unzipped#2020-10-0920:35marshalluntarred/unzipped#2020-10-0920:35marshallit should be a directory#2020-10-0920:36marshalla top level dir with roots and values dirs inside of it#2020-10-0920:38Chicãobackup ls
owner roots values
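[Editor's note] A sketch of the invocation shape that eventually works here: the source must be a file:// URI over the unpacked backup directory, not a .tgz archive. The paths and the dev target URI below are placeholders:

```shell
# Source: file:// URI over the directory containing roots/ and values/.
# Target: the database URI served by the running transactor.
bin/datomic restore-db \
  file:///home/me/backup \
  datomic:dev://localhost:4334/db-dev
```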
#2020-10-0920:38marshallah#2020-10-0920:38marshallyou need to make a URI for it#2020-10-0920:38marshallsorry#2020-10-0920:38marshallit will be like: file:///User/Home/backup/#2020-10-0920:39marshallhttps://docs.datomic.com/on-prem/backup.html#uri-syntax#2020-10-0920:42Chicãoit worked#2020-10-0920:42Chicãothanks !#2020-10-0920:42marshallno problem#2020-10-0922:35ziltiOkay, I don't get it... or-join works completely different from what I expect. When there's no result fulfilling any of the clauses in or-join it will match everything. Is that on purpose? How can I avoid that?#2020-10-0922:53ziltiI thought this:
(d/q '[:find ?eid .
:in $ ?comp-domain ?comp-name
:where
(or-join [?eid]
[?eid :company/name ?comp-name]
[?eid :company/domain ?comp-domain])]
db comp-domain (:company/name data)))
Would be equivalent to this:
(or (d/q '[:find ?eid .
:in $ ?comp-domain ?comp-name
:where
[?eid :company/domain ?comp-domain]]
db comp-domain))
(d/q '[:find ?eid .
:in $ ?comp-domain ?comp-name
:where
[?eid :company/name ?comp-name]]
db (:company/name data))))
But it is not.#2020-10-0922:57Lennart BuitIn the first query, you are getting all ?eid s because the or-join you specify does not unify with ?comp-name nor ?comp-domain. So, practically, the ?comp-domain/`?comp-name` in your :in clause are not the same as the ones you use in the or branches of your or-join#2020-10-0922:58Lennart BuitSo your first query now says “Give me al entity ids of entities that have either a name, or a domain”, the bindings in your :in make no difference#2020-10-0922:59Lennart BuitIf you change (or-join [?eid] ...) to (or-join [?eid ?comp-domain ?comp-name] ...), do you get what you want?#2020-10-0923:00ziltiI'm trying...#2020-10-0923:01ziltiYes, that gives me an empty result, which is correct in this case#2020-10-0923:02ziltiAnd it works for a valid binding too. Thanks! I misinterpreted how that first vector works in or-join, I thought that is to declare the common variable.#2020-10-0923:03Lennart BuitIt declares what variables from outside the or-join to unify with ^^#2020-10-0923:04zilti🙂#2020-10-0923:03steveb8nQ: what’s the best way to query for the most recently created entity (with other conditions) in Datalog?#2020-10-0923:04steveb8nI can include a :where [?e :some/attr _ ?t] and then sort all results by ?t but it feels like there must be some way to use max to do this#2020-10-0923:04Lennart BuitHere are some interesting time rules: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/time-rules.clj maybe that helps ^^#2020-10-0923:06steveb8nperfect! thank you 🙂#2020-10-0923:08Lennart BuitNot sure if your use is exactly in there, but I often use it as a reference if I want to find something history related 🙂#2020-10-0923:08steveb8nit’s a good start. I’ll be able to make it work from this#2020-10-0923:09Lennart Buithaha, teach a man to fish, and all 🙂#2020-10-0923:09steveb8nexactly. you supplied the bait#2020-10-1111:03joshkhi'm curious if anyone here makes use of recursion in their queries? 
i tend to find myself thinking "ah, recursion can solve this problem!" but then later i find myself implementing some tricky manipulations outside of the query to get back the results i want. for example, if i want to find the top level post of a nested post, then i have to walk the resulting tree to its maximum depth, which of course is pretty quick, but does not feel elegant.
; the "Third Level" post
(d/q '{:find [(pull ?post [:post/id :post/text {:post/_posts ...}])]
:in [$]
:where [[?post :post/id #uuid"40b8151d-d5f4-45a6-b78c-67655cdf1583"]]}
db)
=>
; and the top post being the most nested
[[{:post/id #uuid"40b8151d-d5f4-45a6-b78c-67655cdf1583",
:post/text "Third Level",
:post/_posts {:post/id #uuid"9209c1c6-d553-4632-848a-d9929fd7652a",
:post/text "Second Level",
:post/_posts {:post/id #uuid"0b15be5d-84f2-45d1-8b44-b9928d67f388",
:post/text "Top Level"}}}]]
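[Editor's note] The "walk the resulting tree" step can be a small helper over the pull result; a sketch (the name top-post is made up, and it assumes the single-map :post/_posts shape shown above):

```clojure
;; Follow the reverse ref :post/_posts upward until there is no parent,
;; yielding the top-level post map.
(defn top-post [post]
  (if-let [parent (:post/_posts post)]
    (recur parent)
    post))

;; e.g. (top-post {:post/text "Third Level"
;;                 :post/_posts {:post/text "Second Level"
;;                               :post/_posts {:post/text "Top Level"}}})
;; => {:post/text "Top Level"}
```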
#2020-10-1111:42schmeeyou can also implement recursion with rules: https://docs.datomic.com/cloud/query/query-data-reference.html#rules#2020-10-1111:43schmeeit might be easier to write a rule to get the id of the nested post, and then do a pull on that id#2020-10-1111:43schmeeinstead of pulling the whole thing#2020-10-1112:11joshkhhmm. i'm not sure how i would write a recursive rule though, e.g. one that unifies on a top level post in the example above#2020-10-1112:12joshkhthen again i haven't given it much thought. i'll play around and see what i can come up with. thanks for the advice 🙂#2020-10-1112:13teodorluIf you use recursion without a recursion limit, you open yourself to an infinite loop. Perhaps it makes sense to have a hard-coded recursion limit regardless.#2020-10-1112:14joshkhoh yes, i've been down that road before#2020-10-1209:46steveb8nI have a tree read at the core of my app. It was tricky to make it perform well and it's still not super fast. I'm using recursive rules. My summary: it's possible but non-trivial#2020-10-1210:07joshkhdoes Datomic Cloud have a REST API similar to on-prem? https://docs.datomic.com/on-prem/rest.html#2020-10-1215:41vnczI am almost sure it does not; you can build one with Java/Clojure that would internally use the client API#2020-10-1216:04joshkhi haven't looked through the code, but this NodeJS library claims to be cloud compatible, so perhaps there is an accessible rest API?
https://github.com/csm/datomic-client-js#2020-10-1212:49vnczAre EntityIDs in Datomic something we can use as "user space" identifiers or shall we use our own?#2020-10-1212:51marshallyou should use domain identifiers#2020-10-1212:51marshallthere are a variety of reasons not to expose entity IDs to your applications layers as user-space identifiers#2020-10-1212:52marshallif there isn’t a straightforward choice for an identifier in your particular domain, you can always generate a UUID for entities and just use that#2020-10-1213:43vnczIs there a way to get the created id without having to re-query? I can see there is a tempId field but it's empty#2020-10-1213:43vnczThis is the current schema I am using#2020-10-1213:53vnczOh ok my mistake, it is not autogenerated, I'm still responsible for generating it#2020-10-1214:04marshallright, and the :tempids map will return the mapping between actual assigned entity IDs and the tempids you supply#2020-10-1214:04marshallwhen/if that’s relevant#2020-10-1214:15vnczOk, I guess I'll have to use a regular java.uuid to get my number; was hoping Datomic would handle the ids for me somehow but that ain't a problem#2020-10-1212:52vnczFair. Thanks.#2020-10-1212:53marshall👍#2020-10-1215:42vnczWhat is the best practice about the schema? Do you usually transact it every time the application starts? Or only when running a "migration"?#2020-10-1215:55joshkhcan i use :db/cas to swap an attribute value for one that is missing to one that is present (and vice versa), or is it only compatible with swapping two "non-nil" values?#2020-10-1215:59joshkh(d/transact conn {:tx-data [[:db/cas 12345 :reader/nickname nil "joshkh"]]})
=>
entity, attribute, and new-value must be specified
i suspect i'll have to roll out my own transactor function for that?#2020-10-1216:53benoitThe old value can be nil (per doc): "You can use nil for the old value to specify that the new value should be asserted only if no value currently exists."#2020-10-1309:52joshkhhuh, thanks for pointing that out. i thought i tried that... sure enough nil to non-nil works. thanks!#2020-10-1218:10ChicãoSomeone can help me? I want to restore my backup but I've got this problem
java.lang.IllegalArgumentException: :restore/collision The name 'db-dev' is already in use by a different database
#2020-10-1218:17ChicãoI deleted folder data from datomic/#2020-10-1303:41vnczWhat error is this? Why am I only limited to use find-rel ?#2020-10-1305:08kennyDatomic client API doesn't support all the find specs that the peer API supports. See https://docs.datomic.com/cloud/query/query-data-reference.html#find-specs for what is supported.#2020-10-1312:45vncz@U083D6HK9 So shall I use the peer server in case I'd like to do such query?#2020-10-1303:41vnczThis is the query I am trying to run#2020-10-1312:57marshall@vincenz.chianese change your find to: :find ?name ?surname#2020-10-1312:57marshallthen you can manipulate the collection(s) returned in your client application if necessary#2020-10-1313:29vnczYeah, I was trying to avoid such boilerplate per each query @marshall#2020-10-1313:29vnczBecause I'm receiving something like [[{"name": "name", "surname": "surname"}]]#2020-10-1313:30vnczThat's kind of weird as structure (although I am sure there's a reason for that#2020-10-1313:33marshallyou could pull in the find#2020-10-1313:34marshalli.e. :find (pull ?id [:name :surname])#2020-10-1313:43vnczI tried that but I think that still gave me a weird structured result#2020-10-1313:50vnczIndeed: [[{"person/name":"Porcesco","person/surname":"Gerbone"}]]#2020-10-1313:51vnczThat's the same result I'm regularly getting using the regular query#2020-10-1319:07SvenAfter recently changing a backend stack from
AWS Appsync -> AWS Lambda -> Datomic ions ->
to AWS Appsync -> HTTP direct -> API Gateway -> Datomic ions I am now getting errors like
Syntax error compiling at (clojure/data/xml/event.clj:1:1)
java.lang.IllegalAccessError: xml-str does not exist
Syntax error compiling at (clojure/data/xml/impl.clj:66:12).
java.lang.IllegalAccessError: element-nss does not exist
Syntax error compiling at (******/aws/cognito.clj:20:38).
java.lang.RuntimeException: No such var: aws/invoke
They happen every now and then with seemingly no way to reliably reproduce them and never happened when calling ions via Lambdas.
I have updated ion, ion-dev, client api, datomic storage and compute to latest as of current date with no effect.
Does anyone have ideas where to look for hints or what could be a cause for such behaviour?#2020-10-1319:27SvenThere is one major change compared to the Lambda configuration - I am now resolving functions in other namespaces based on routes. Could this have any effect and if so then why? There are no errors with resolving these functions though.#2020-10-1319:29Alex Miller (Clojure team)those all look like they could be a case of not having the expected version of a dependency (OR that it's asynchronously loading and you're seeing partial state)#2020-10-1319:29Alex Miller (Clojure team)when you resolve functions, how are you doing it?#2020-10-1319:29Alex Miller (Clojure team)I would recommend using requiring-resolve#2020-10-1319:31SvenI parse a string to a symbol and then resolve it e.g. (when-let [f (resolve 'app.ions.list-something/list-something)] (f args))#2020-10-1319:32Alex Miller (Clojure team)so you're not ever dynamically loading namespaces?#2020-10-1319:33Alex Miller (Clojure team)I mean, where is 'app.ions.list-something coming from? is that a dynamic value?#2020-10-1319:35SvenI get a string from a route e.g. list-something and then I convert it into a symbol app.ions.list-something/list-something and then resolve it. Just like in the datomic ion-starter example https://github.com/Datomic/ion-starter/blob/7d2a6e0bda89ac3bb4756501c3ada3d1fbc80c1a/src/datomic/ion/starter.clj#L26#2020-10-1319:40SvenFixed my examples 😊. I guess I’ll try requiring-resolve .#2020-10-1319:42Svenand I am also requiring the namespace dynamically just like in that example (-> ion-sym namespace symbol require)#2020-10-1319:46SvenThis is my http direct handler fn
(defn handler
[{:keys [uri] :as req}]
(try
(let [arg-map (-> req parse-request validate authenticate)
{:keys [ion-sym]} arg-map]
(-> ion-sym namespace symbol require)
(let [ion-fn (resolve ion-sym)]
(when-not ion-fn
(throw (ex-info ...)))
(ion-fn arg-map)))
(catch ....)))#2020-10-1320:05Alex Miller (Clojure team)yeah, I would strongly recommend requiring-resolve - it uses a shared loading lock#2020-10-1320:48SvenI changed resolve -> requiring-resolve . The issue still persists with the exception that now only specific namespaces fail and in almost 100% of cases. What makes them different is that they implement cognitect aws api and fail at cognitect/aws/client.clj 😕#2020-10-1320:57Alex Miller (Clojure team)what does "they implement cognitect `aws api` " mean? they == what? implement == what?#2020-10-1320:58Alex Miller (Clojure team)aws api does do some dynamic loading but should be doing safer things already#2020-10-1321:16SvenIf I resolve and execute a symbol that uses cognitect.aws.client.api to invoke an operation on an AWS service then I always get Syntax error compiling at (cognitect/aws/client.clj…).
I added (:require [cognitect.aws.client.api]) to the handler namespace and seem to get no syntax error compiling at errors anymore. I guess it’s a fix for now.#2020-10-1321:35Alex Miller (Clojure team)yeah, don't know off the top of my head but that would have been my suggestion#2020-10-1322:53Brandon OlivierDoes the Datomic client lib support fulltext ?#2020-10-1400:19joshkhis it normal for SOCKS4 tunnel failed, connection closed to occur when running a query in the REPL during a :deploy?#2020-10-1402:51steveb8nQ: is there a metric somewhere that shows the hit-rate for cached queries? I’d like to know if I accidentally add queries that are not cached in their parsed state#2020-10-1412:41motformI have a question about using :db/ident as an enum. In my model, a :session/type is modelled as an enum of :db/idents, which works great when writing queries. However, there are times when I want to return the :session/type to be consumed as a value like :type/online, but I get the datomic id instead. Is there a way to get idents as values or should I just use a keyword instead?#2020-10-1413:38Lennart BuitYou can just pull them: [{:session/type [:db/ident]} ...rest]#2020-10-1413:42Lennart BuitYou can also just join them in your queries, say because you are finding tuples of entities and statuses:
(d/q '[:find ?e ?type-ident
:in $ ?e
:where [?e :session/type ?type]
[?type :db/ident ?type-ident]
...)#2020-10-1413:42Lennart BuitDoes that help ^^?#2020-10-1413:43motformYes, that was exactly what I was wondering about. Thank you!#2020-10-1416:09joshkhjust one thing to note, that will only return entities that have a :session/type value. i made a similar post about it here: https://forum.datomic.com/t/enumerated-values-in-tuples-are-only-eids/1644#2020-10-1416:11joshkhno response in 11 days, so if you would find an answer to the question useful then perhaps give it a bump or a like 🙂#2020-10-1417:41souenzzo@love.lagerkvist you can do (d/pull db [(:session/type :xform :db/ident)] id) => {:session/type :type/online}
Using #eql libraries you can programmatically add it to all your refs
(defn add-ident-xform
[ident? query]
(->> query
eqld/query->ast
(eql/transduce-children (map (fn [{:keys [dispatch-key] :as node}]
(if (ident? dispatch-key)
(assoc-in node [:params :xform] :db/ident)
node))))
eqld/ast->query))
(add-ident-xform
#{:session/type}
'[:foo
{:bar [:session/type]}])
;; => [:foo {:bar [(:session/type :xform :db/ident)]}]
But as a #fulcro and #eql developer, I like to return :session/type {:db/ident :type/online} because it allows you to include useful data for the frontend, like :session/type {:db/ident :type/online :label "OnLine" :icon "green-dot"}#2020-10-1423:57onetomWe had the impression that sometimes the ion code we deploy using the datomic CLI command takes a while (a few minutes) to actually replace the previously running version.
We are using unreproducible deployments into a solo topology.
The issue is with a web-ion GET request, which is called through an APIGW, using their new HTTP API (instead of RESTful API) and integrating the datomic lambda as a proxy, using the v1.0 payload format.
All versions of tools and libs are the latest (as of yesterday).
Has anyone experienced anything like this?#2020-10-1423:59onetomThe :deploy-status reports SUCCESS for both keys in its response of course.#2020-10-1500:17steveb8n@onetom not sure what problem you are describing here. one useful tool is to watch the deploy in the AWS console in “Code Deploy”. That can provide useful info#2020-10-1500:18onetomthanks!
i never looked at that console yet, only the various cloudwatch logs.#2020-10-1500:22steveb8nsure. are you also enabling api-gw level logging (as well as lambda/ion logs)? I have debugged many issues with that level of detail#2020-10-1501:45onetomis there a way to deploy ions from a clojure repl?
i tried this:
(datomic.ion.dev/-main
(pr-str
{:op :push
:uname "grace"
:region "ap-southeast-1"
:creds-profile "gini-dev"}))
but it quits my repl after executing the operation.
it would be nice to just expose the function which gets called with that map provided as a command line argument and return the printed result map, so i can just grab the :deploy-command (or rather just the map itself, which describes the next operation)#2020-10-1504:40steveb8nhttps://github.com/jacobobryant/trident/blob/master/src/trident/ion_dev/deploy.clj#2020-10-1504:40steveb8nhttps://gist.github.com/jacobobryant/9c13f4cd692ff69d8f87b0d872aeb64e#2020-10-1513:14joshkhhere's what i use, which lets me deploy from the command line to a provided group name via an alias:
$ clj -Adeploy-to-aws some-group-name
https://gist.github.com/joshkh/3455a6905517a814b4623d01925baf0e#2020-10-1900:20Joe R. SmithThere is. 🙂
You can use the functions push and deploy in the namespace datomic.ion.dev
Here's some code from my dev ns on a project. I think you can visually extract the important bits and ignore the project-specific ones:
(defn deploy-unrepro-build!
  ([]
   (deploy-unrepro-build! nil))
  ([system-config-overrides]
   (deploy-unrepro-build! system-config-overrides
                          (str "dev-" (java.util.UUID/randomUUID))))
  ([system-config-overrides uname]
   (let [system-config (system/get-config system-config-overrides)]
     (ion/push {:uname uname})
     (ion/deploy {:group (:pawlytics/deployment-group system-config)
                  :uname uname}))))

(defn deploy-rev-build!
  ([rev] (deploy-rev-build! rev nil))
  ([rev system-config-overrides]
   (let [system-config (system/get-config system-config-overrides)]
     (ion/push {:rev rev})
     (ion/deploy {:group (:pawlytics/deployment-group system-config)
                  :rev rev}))))

(defn deploy-current-rev-build!
  ([]
   (deploy-current-rev-build! nil))
  ([system-config-overrides]
   (deploy-rev-build! (-> (shell/sh "git" "rev-parse" "HEAD")
                          :out
                          str/trim-newline)
                      system-config-overrides)))
#2020-10-1900:21Joe R. Smithwarning though: repro builds don't check that the working directory is clean like they do using the clj command.#2020-10-1911:40onetomthanks everyone!
I will give these a try!#2020-10-1508:06ErweeHey, coming from a typical app, if you have heavy read operations, you could spin up a sql read only replica and point your data guys there, safely knowing you won’t topple your prod db.
How is this generally solved in the on prem postgresql storage datomic world? Something like memcache won’t offer much value; it’s one-off huge queries being run.#2020-10-1518:56favilaHow big is your table in bytes and what is your current read/write to Postgres? The load on storage is purely IO—Postgres is basically used as a dumb key-value store. It seems very unlikely (but not impossible) that this is going to be a problem.#2020-10-1519:01ErweeWe were in DDB, but it became too expensive, so we moved to PostgreSQL. DB is 50GB on clean restore, up to 100 GB, as we need to vacuum the storage#2020-10-1519:02ErweeIt’s theoretical at this point, postgresql is more than capable, I’m just curious what the options are, or what other folks are doing for this kind of thing 🙂#2020-10-1623:42favilaIf the segment churn on your db is low, consider keeping a valcache volume around and remounting it for this big query job#2020-10-1518:15donyormI'm trying to use dev-local, but I'm getting this error: Unable to load client, make sure com.datomic/client-impl-local is on your classpath. Is there a way to add client-impl-local directly to the classpath? I'm not seeing a maven dependency online anywhere#2020-10-1518:48kennyIt’s probably a transitive dep. Are you on the latest Clojure CLI version @U1C03090C ? I know there was an issue with an older Clojure CLI version and dev-local. #2020-10-1518:56donyormYeah I have the same issue with the newest CLI. I tried looking for the dependency transitively with -Stree, but it wasn't there#2020-10-1519:06kennyHmm. Well you should not need to add that dep to your cp. Can you paste your -Stree you used to launch your REPL?#2020-10-1520:57donyormHere's what I got#2020-10-1521:39kennyHmm. Not sure. Guessing someone with deeper knowledge of dev-local needs to help here.#2020-10-1521:40donyormWell thanks for the attempt anyway.
I appreciate it!#2020-10-1521:40kennyOnly other thing that might help is posting a snippet of what you're doing to get the error.#2020-10-1521:41kennyFwiw, this is what my dev-local looks like from -Stree
com.datomic/dev-local 0.9.203
com.google.errorprone/error_prone_annotations 2.3.4
com.datomic/client-api 0.8.54
com.google.guava/listenablefuture 9999.0-empty-to-avoid-conflict-with-guava
com.datomic/client 0.8.111
com.cognitect/http-client 0.1.105
org.eclipse.jetty/jetty-client 9.4.27.v20200227
org.checkerframework/checker-compat-qual 2.5.5
com.google.guava/failureaccess 1.0.1
com.google.guava/guava 28.2-android
com.datomic/client-impl-shared 0.8.80
com.cognitect/hmac-authn 0.1.195
com.google.j2objc/j2objc-annotations 1.3
com.datomic/query-support 0.8.27
org.fressian/fressian 0.6.5
com.google.code.findbugs/jsr305 3.0.2
org.ow2.asm/asm-all 4.2#2020-10-1521:41kennyI don't see any dep in my -Stree for com.datomic/client-impl-local#2020-10-1521:45donyormYeah I wonder why it's looking for that#2020-10-1521:46donyormoh I used the wrong type of :server-type in the config. I did :local instead of :dev-local . That would do it#2020-10-1520:50Lennart BuitLittle data modeling question: Say that I have a category with tags, and these tags are component/many of this category. Now, I’d like to add a composite (tuple) key to this tag entity that says [tagName, category] is unique, but there is no explicit relation from tag -> category. Do I have to reverse this relation / lose the component-ness to add this composite key?#2020-10-1522:03Brandon OlivierI’m trying to do a fulltext search on my Datomic instance, but I get this error:
The following forms do not name predicates or fns: (fulltext)
Anybody know why that might be? I’m following straight from what’s in the docs#2020-10-1522:15Lennart BuitAre you using on prem, or cloud?#2020-10-1615:19Brandon Olivier@UDF11HLKC This is local. It should be the on-prem version, but I’m connecting via the datomic api client.#2020-10-1615:20Lennart BuitIirc you can’t use fulltext from the client api#2020-10-1618:30Brandon OlivierThat was my suspicion, but I couldn't confirm. So I need to convert my application to use the peer server internally?#2020-10-1618:30Brandon Olivieror I guess "should", not "need"#2020-10-1619:34Lennart BuitThat depends on what you’d like to achieve. If your goal is to move to the cloud at some point, you may want to consider sticking with the client API and instead using some other store for your full text needs.
Here is a thread on the datomic forums about it: https://forum.datomic.com/t/datomic-fulltext-search-equivalent/874/6#2020-10-1619:34Lennart BuitI can’t decide what your architecture should look like, but this is advice I’ve seen before 🙂#2020-10-1616:29roninhackerWe're running with datomic on the backend and datascript on the front end. I'd like to just mirror datomic ids on the client side, but datascript uses longs for its ids, which means some datomic ids don't fit. Are there datomic settings that can constrain the id space? (if not I'll just write some transformation glue on the FE)#2020-10-1618:30favilaDatomic also uses longs....#2020-10-1618:32favilaDo you mean doubles? Are you worried about the 52 bit integer representation limit?#2020-10-1622:24roninhackerah, yes, I guess it's not strictly datatype I'm worried about, but datascript's id limit of 2147483647#2020-10-1622:26roninhacker(which seems to be 2^31 -1 )#2020-10-1623:17favilaD/tx->t will give you a 42 bit unique id per entity, which is extremely likely to be < 32 bits u less you have a huge number of entities or transactions. Maybe that’s useful info for some clever encoding scheme#2020-10-1623:44roninhackerhmmm, thank you#2020-10-1716:31kennyI have a query that looks like this .
'[:find ?r
:in $ ?c [?cur-r ...]
:where
[?c ::rs ?r]]
I'd like to restrict ?r to be all ?r's that are not in ?cur-r. Is there a way to do this?#2020-10-1716:38kennyI could make ?cur-r a data source but that requires me to have ?cur-r db ids for cur-r. Currently only have a list of lookup refs.
'[:find ?r
:in $ $cur-r ?c
:where
[?c ::rs ?r]
(not [$cur-r ?r])]
#2020-10-1716:41Lennart BuitMaybe (not [(identity ?cur-r) ?r]) works#2020-10-1716:42kennyReturns all ?r's#2020-10-1716:43kenny?cur-r is passed in as a list of lookup refs#2020-10-1716:46kennyI may just have to convert the ?cur-r lookup refs to eids. Not a big deal but it seems like there should be a way to make this happen in a single query 🙂#2020-10-1718:11favila(not [(datomic.api/entid $ ?cur-r) ?r])#2020-10-1718:12kennyOoo, nice! Is datomic.api documented somewhere?#2020-10-1718:12favilaIt’s the peer api#2020-10-1718:12kennyOh, right - I'm on cloud.#2020-10-1718:13favilaIt might still be there#2020-10-1718:13kennyPerhaps. Not documented though: https://docs.datomic.com/client-api/datomic.client.api.html#2020-10-1718:13favilaIt would just be on the server’s classpath#2020-10-1718:14kennyYeah. Curious if that's able to be depended on though haha.#2020-10-1718:18favilaIf you know they are lookup refs you can decompose and resolve the ref#2020-10-1718:19favilaIf not you could reimplement entid as a rule#2020-10-1718:20favila:in [[?cur-a ?cur-v]] :where [?cur-r ?cur-a ?cur-v]#2020-10-1718:24kennyAh, that seems like it’d work! Will try it in a bit. Thanks @U09R86PA4#2020-10-1718:24favilaAs a rule [[(entid [?x] ?eid)[(vector? ?x)]...] [(entid [?x] ?eid) [(int? ?x)][(identity ?x) ?eid]] and a keyword case looking up ident#2020-10-1718:25favilaSorry I can’t type out the whole thing, on a phone#2020-10-1719:53ChicãoHi, does anyone know how to solve this problem?
bin/transactor config/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
Terminating process - Lifecycle thread failed
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
Position: 31
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invokeStatic(core.clj:2300)
at clojure.core$future_call$reify__8454.deref(core.clj:6974)
at clojure.core$deref.invokeStatic(core.clj:2320)
at clojure.core$deref.invoke(core.clj:2306)
at datomic.lifecycle_ext$standby_loop.invokeStatic(lifecycle_ext.clj:42)
at datomic.lifecycle_ext$standby_loop.invoke(lifecycle_ext.clj:40)
at clojure.lang.Var.invoke(Var.java:384)
at datomic.lifecycle$start$fn__28718.invoke(lifecycle.clj:73)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
Position: 31#2020-10-1719:54Chicãothis is my config/properties...
protocol=sql
host=localhost
port=4334
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
sql-driver-class=org.postgresql.Driver
#2020-10-1720:04ChicãoI solved this problem by running
CREATE TABLE datomic_kvs (id text NOT NULL, rev integer, map text, val bytea, CONSTRAINT pk_id PRIMARY KEY (id)) WITH (OIDS = FALSE);
ALTER TABLE datomic_kvs OWNER TO datomic; GRANT ALL ON TABLE datomic_kvs TO datomic; GRANT ALL ON TABLE datomic_kvs TO public;
#2020-10-1721:34cjmurphyhttps://docs.datomic.com/on-prem/storage.html#sql-database#2020-10-1721:34cjmurphyI can see that's where creating that table is documented.#2020-10-1805:54zhuxun2In datomic, is there a way to enforce that a relation be one-to-many as opposed to many-to-many? For example, setting :folder/files to have :db.cardinality/many does not prohibit the co-existence of [x :folder/file k] and [y :folder/file k].#2020-10-1809:44joshkhif i'm reading that correctly, then file k can only exist in one :folder/files relationship, correct?
you could put a :db.unique/value constraint on :folder/files
(d/transact (client/get-conn)
            {:tx-data [#:db{:ident       :some/id
                            :valueType   :db.type/string
                            :cardinality :db.cardinality/one
                            :unique      :db.unique/identity}
                       #:db{:ident       :folder/files
                            :valueType   :db.type/ref
                            :cardinality :db.cardinality/many
                            :unique      :db.unique/value}]})
here is a folder with two files:
(d/transact (client/get-conn)
            {:tx-data [{:db/id   "file1"
                        :some/id "file1"}
                       {:db/id   "file2"
                        :some/id "file2"}
                       ;; file1 and file2 are part of folder1
                       {:db/id        "folder1"
                        :some/id      "folder1"
                        :folder/files ["file1" "file2"]}]})
=> Success
then adding file1, which is already claimed by folder1, throws an exception when adding it to a new folder2:
(d/transact (client/get-conn)
            {:tx-data [{:db/id        "folder2"
                        :some/id      "folder2"
                        :folder/files [{:some/id "file1"}]}]})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Unique conflict: :folder/files, value: 30570821298684032 already held by: 44692948645838978 asserted for: 69889357107953795#2020-10-1909:40schmeecan I write a shortest path query in Datomic, e.g can I determine if it is possible to navigate from Entity A to Entity B via some reference attribute?#2020-10-1917:03favilaHere is a trivial example:#2020-10-1917:03favila'[[(path-exists? ?e1 ?a ?e2)
  [?e1 ?a ?e2]]
  [(path-exists? ?e1 ?a ?e2)
   [?e1 ?a ?e-mid]
   [(!= ?e2 ?e-mid)]
   (path-exists? ?e-mid ?a ?e2)]]
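The recursive rule above can be mirrored in plain Clojure, which may help build intuition for what it computes. This is a hypothetical helper, not Datomic API: `edges` is a set of [from to] pairs standing in for [?e1 ?a ?e2] datoms, and like the rule it requires at least one hop.

```clojure
;; Sketch: is e2 reachable from e1 in one or more hops over `edges`?
;; Breadth-first walk; already-visited nodes are dropped so cycles terminate.
(defn path-exists?
  [edges e1 e2]
  (loop [frontier #{e1}
         visited  #{}]
    (let [nexts (set (for [[from to] edges
                           :when (contains? frontier from)]
                       to))]
      (cond
        (contains? nexts e2) true
        ;; nothing new left to explore: e2 is unreachable
        (empty? (remove visited nexts)) false
        :else (recur (set (remove visited nexts))
                     (into visited frontier))))))
```

Unlike the Datalog version, this stops at the first hit rather than exhaustively discovering every path.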
#2020-10-1917:04favilathe general pattern with recursive rules is to define the rule multiple times, and have one that is terminal, and the rest recursive, and (generally but not required) the rule impls match disjoint sets.#2020-10-1917:05favilaunfortunately there’s no “cut” to stop evaluation early. I’m pretty sure this example will exhaustively discover every possible path, even though any one will do. However, it may discover them in parallel.#2020-10-1917:13schmeethank you for the detailed example! 🙂#2020-10-1917:18favilaNote this example only searches refs in a forward direction. With two additional implementations, it could search backwards also#2020-10-1909:41schmeeI’ve looked at all the examples of recursive rules that I could find and they all “hardcode” the depth of the search (such as the MBrainz example: https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj#L37)#2020-10-1916:31kennyI would've expected the below query to return all txes where ?tx is not in ?ignore-tx. I actually get all txes, as if the not is completely ignored. ?ignore-tx is passed in as a set of tx ids. Why would this happen?
'[:find ?t ?status ?tx ?added
:in $ [?ignore-tx ...]
:where
[?t ::task/status ?status ?tx ?added]
(not [(identity ?ignore-tx) ?tx])]
#2020-10-1916:38faviladatalog comparisons are not “type”-aware. are all ?ignore-tx actually tx longs and not some other representation?#2020-10-1916:38kennyYes
(type (first ignore-txes))
=> java.lang.Long
#2020-10-1916:38favilaare they T or TX?#2020-10-1916:39kennytx#2020-10-1916:39favila(both are longs, but TXs have partition bits)#2020-10-1916:39faviladoes this behave differently? [(!= ?ignore-tx ?tx)]#2020-10-1916:39favila(instead of (not …)#2020-10-1916:40kennySame result#2020-10-1916:41favilaprint (first ignore-txes) ?#2020-10-1916:41kenny(first ignore-txes)
=> 13194142112981
#2020-10-1916:43favilaand you’re actually sure this is in the result set? You can test with `
'[:find ?t ?status ?tx ?added
:in $ [?tx ...]
:where
[?t ::task/status ?status ?tx ?added]
]
#2020-10-1916:44kenny(d/q {:query '[:find ?t ?status ?tx ?added
:in $ [?ignore-tx ...]
:where
[?t ::task/status ?status ?tx ?added]
[(!= ?ignore-tx ?tx)]
[?tx :audit/user-id ?user]]
:args [(d/history (d/db conn))
#{13194142035321 13194142112981}]
:limit 10000})
=>
[[606930421025569 :cs.model.task/status-in-progress 13194142112981 false]
[606930421025569 :cs.model.task/status-in-progress 13194142035321 true]
[606930421025569 :cs.model.task/status-open 13194142112981 true]
[606930421025569 :cs.model.task/status-open 13194142035321 false]]#2020-10-1916:46kennyIdentical result with (not [(identity ?ignore-tx) ?tx]).#2020-10-1916:49favilaThat is really weird. I can’t reproduce with a toy example#2020-10-1916:49favila(d/q '[:find ?e ?stat ?tx ?op
:in $ [?ignore-tx ...]
:where
[?e :status ?stat ?tx ?op]
[(!= ?ignore-tx ?tx)]
]
[[1 :status :foo 100 true]
[1 :status :bar 100 false]]
#{100}
)#2020-10-1916:49favila=> #{}#2020-10-1916:50kennyYeah - that's what I would expect#2020-10-1916:53favilawhat about using contains?#2020-10-1916:53favila(d/q '[:find ?e ?stat ?tx ?op
:in $ ?ignore-txs
:where
[?e :status ?stat ?tx ?op]
(not [(contains? ?ignore-txs ?tx)])
]
[[1 :status :foo 13194142112981 true]
[1 :status :bar 13194142112981 false]
[1 :status :baz 13194142112982 true]]
#{13194142112981}
)
=> #{[1 :baz 13194142112982 true]}
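The exclusion that the contains? clause above expresses is also easy to state over raw datom tuples in plain Clojure; a hypothetical helper like this can be handy for sanity-checking query results outside the engine:

```clojure
;; Sketch: drop [e a v tx added?] tuples whose tx is in an ignore set --
;; the plain-Clojure analogue of (not [(contains? ?ignore-txs ?tx)]).
(defn drop-txes
  [ignore-txs datoms]
  (remove (fn [[_e _a _v tx _added]]
            (contains? ignore-txs tx))
          datoms))
```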
#2020-10-1916:54favilaI’m just kinda probing to see if this is a problem with comparisons or something deeper#2020-10-1916:54kenny(d/q {:query '[:find ?t ?status ?tx ?added
:in $ ?ignore-tx
:where
[?t ::task/status ?status ?tx ?added]
(not [(contains? ?ignore-tx ?tx)])
[?tx :audit/user-id ?user]]
:args [(d/history (d/db conn))
#{13194142035321 13194142112981}]})
=> []#2020-10-1916:55kennyThat's the expected result. Still odd that the former didn't work.#2020-10-1916:57kennyEven odder is that it worked in your toy example.#2020-10-1916:57favilaI think that points to something funky with the numeric comparisons done by the datalog engine, like it’s using object identity or something.#2020-10-1916:58favilamy toy example used on-prem, but should be able to replicate with cloud or peer-server#2020-10-1917:00favilaI was using 1.0.6165#2020-10-1917:01kennyThis is using the client api 0.8.102 and connecting to a system running in the cloud.#2020-10-1917:08kennySeems to work as expected with dev-local as well.#2020-10-1917:16kennyDatomic Cloud includes :db-name and :database-id as get'able keys from a d/db. Are these part of the official API?#2020-10-1917:17kennye.g.,
(d/db conn)
=>
{:t 2580397,
:next-t 2580398,
:db-name "my-db",
:database-id "74353541-feea-4ea2-afa6-f522a169856d",
:type :datomic.client/db}#2020-10-1917:19kennyIt would appear so (for :db-name at least) https://docs.datomic.com/client-api/datomic.client.api.html#var-db#2020-10-1917:20kennyIf that is true, shouldn't dev-local support that? See below example using dev-local 0.9.203.
(def c2 (d/client {:server-type :dev-local,
:system "dev-local-bB7z07Io_A",
:storage-dir "/home/kenny/.datomic/data/dev-local-bB7z07Io_A"}))
(d/db (d/connect c2 {:db-name "cust-db__0535019e-79fe-44a1-a8d9-b19394abd958"}))
(:db-name *1)
=> nil#2020-10-1917:31kennyFairly certain this is a bug so I opened a support req: https://support.cognitect.com/hc/en-us/requests/2879#2020-10-2017:15daniel.spanieldoes datomic mem-db support tuple type ? i tried to add a tuple field and it barfed so not sure ?#2020-10-2017:54favilaThis should be dependent on datomic lib version, not storage type#2020-10-2018:06daniel.spaniellib version? where is that found ? we use cloud db for production#2020-10-2018:12favilahow do you create a mem db with cloud?#2020-10-2018:43daniel.spanielyou dont .. you use one or the other. looks like dev-local has some thing dev-local-tu for doing test like things where you blow away the db around each test, which is what we want. but i think mem-db does not support tuple#2020-10-2018:44favilaAFAIK before dev-local there were no mem-dbs with cloud#2020-10-2018:44favilaso I’m not sure what you are doing#2020-10-2018:46favilaif you use a peer-server with on-prem you could do it, but that depends on the peer lib’s version. There was also this: https://github.com/ComputeSoftware/datomic-client-memdb#2020-10-2018:48daniel.spanielthat the one we using, but we just run that locally , when on prod using cloud db , we switch between one and the other#2020-10-2018:49favilaso, that depends on an on-prem lib, and that on-prem lib’s version is what’s dictating whether tuples are supported or not (most likely)#2020-10-2018:49favilaI’m just saying there’s more to the story than “mem-db -> no tuple types”#2020-10-2018:51favilaon-prem 0.9.5927 added tuples: https://docs.datomic.com/on-prem/changes.html#0.9.5927#2020-10-2019:22kennyI imported a prod db via dev-local/import-cloud. 
Is there a way to get a breakdown of the size of the db.log file?#2020-10-2019:25kennyI'm also curious if import-cloud provides a way to import the current version of the database with no historical retracts.#2020-10-2019:52kennyAre there any issues with running multiple import-cloud in parallel?#2020-10-2021:36donyormSo I have an entity with a child with cardinality many, and I query for all entities where one of these child entities matches a value. I tried
'(or
(and
[?e :child-element-key ?ste]
[(.contains ^java.lang.String ?ste "value")]))
But that didn’t work, is there another way to do this?#2020-10-2021:39favilaclojure.core/list isn’t needed--you are already quoting#2020-10-2021:43donyormThanks, sorry I copied and modified this from my code where I wasn't quoting#2020-10-2021:47favilathis is generally how you do it; it’s going to be difficult to diagnose your problem without a complete example. You could try simplifying the query with specific data to see what’s going wrong. e.g.:
(d/q '[:find ?e
:where
(or
(and
[?e :child-element-key ?ste]
[(.contains ^java.lang.String ?ste "value")]))]
[[1 :child-element-key "value1"]
[2 :child-element-key "nope"]])
=> #{[1]}#2020-10-2021:48donyormOk thanks, I wasn't sure if I was completely off base, probably an issue in my data then. Thank you!#2020-10-2021:51donyormYes definitely was a problem in the data, thanks for the help though!#2020-10-2100:49Michael Stokleyare subqueries only possible with ions? i'm fooling around and i'm running into cognitect/not-found errors that tell me "'datomic/ion-config.edn' is not on the classpath"#2020-10-2100:49Michael Stokleyhere's the subquery i attempted:
(d/q `[:find ~'?contract ~'?latest-snapshot-tx-instant
:where
[~'?contract :contract/id]
[(datomic.client.api/q [:find (~'max ~'?snapshot-tx-instant)
:where
[~'?contract :contract/snapshots ~'?_ ~'?snapshot-tx]
[~'?snapshot-tx :db/txInstant ~'?snapshot-tx-instant]])
~'?latest-snapshot-tx-instant]]
db)
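A fully qualified datomic.client.api/q is not resolvable inside :where; Datomic's query grammar provides a built-in q for subqueries (documented in the query-data-reference page linked further down). Here is a rough, untested sketch of the attempt above rewritten that way — the variable and attribute names are taken from the attempt, not verified against a schema:

```clojure
;; Untested sketch: the subquery rewritten with the built-in `q`,
;; which takes the inner query followed by its inputs. The scalar
;; find spec (.) binds the single aggregate result.
'[:find ?contract ?latest-snapshot-tx-instant
  :where
  [?contract :contract/id]
  [(q [:find (max ?inst) .
       :in $ ?c
       :where
       [?c :contract/snapshots _ ?snapshot-tx]
       [?snapshot-tx :db/txInstant ?inst]]
      $ ?contract)
   ?latest-snapshot-tx-instant]]
```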
#2020-10-2100:57Joe Laneuse [(q @U7EFFJG73#2020-10-2101:02Michael Stokleythat gets me further... thank you!#2020-10-2101:10Michael Stokleyhere we are: https://docs.datomic.com/cloud/query/query-data-reference.html#q#2020-10-2101:26Michael Stokleya humble suggestion to whoever may have control over the documentation: i could not find the q function documentation when googling "datomic subquery"#2020-10-2104:46steveb8nQ: I want to use an API-Gateway custom authorizer (lambda) with Ions. The authorizer decorates the request which is passed through to the Ion Lambda (I’m not using http-direct yet). The auth data is in the lambda request “context”, not in headers. Using a ring handler which has been “ionized” I can’t figure out how to access that data. Has anyone got any experience with this?#2020-10-2105:02steveb8nI found the answer in the docs. the “requestContext” is in the web ion request#2020-10-2105:02steveb8nhttps://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format#2020-10-2113:45jaretCognitect dev-tools version 0.9.51 now available
Version 0.9.51 of Cognitect dev-tools is now available for download.
See https://forum.datomic.com/t/cognitect-dev-tools-version-0-9-51-now-available/1666#2020-10-2113:57vncz@U1QJACBUM I can't find anything in the documentation about the MemDB feature; what's that about?#2020-10-2113:59jaret@U015VUATAVC Sorry the doc's cache wasn't busted#2020-10-2113:59jarethttps://docs.datomic.com/cloud/dev-local.html#memdb#2020-10-2114:28vnczAh ok sweet, thanks!#2020-10-2119:41zhuxun2What's the idiomatic way to
> retract (`:db.fn/retractEntity`) all the releases that have a particular given :release/artist, and return the list of :release/name's of the releases retracted, all done atomically
I know that the first part can be done with a transaction function, and the second part can be extracted manually from the “TX_DATA” key in the transaction report map. However, I found manual extraction to be too dependent on the structure of the transaction report map, which seems to be rather subject to future changes. I was wondering if there's a more elegant way of doing this that I am not aware of.#2020-10-2120:08favilathe tx-data report map has been stable for years AFAIK. what difficulty are you encountering specifically?#2020-10-2120:10favilaMy go-to strategy in this case would be to look in tx-data for datoms matching pattern [_ :release/name ?value _ false] . That will only tell you that a value was retracted, not that a retractEntity caused it. With some domain knowledge you could refine that further#2020-10-2120:10favila(note you’ll have to resolve :release/name to its entity id somehow)#2020-10-2120:18zhuxun2> look in tx-data for datoms matching pattern ...
Interesting ... does Datomic provide a mechanism to do this kind of matching against a list of datoms? @U09R86PA4#2020-10-2120:24zhuxun2@U09R86PA4 Or do I have to do (map #(nth % 2) (filter (fn [[e a v t f]] (and (= :release/name a) (not f))) (:tx-data query-report)))?#2020-10-2120:27favilaThat should be enough; if you need more sophistication you can use d/q with the :db-before or :db-after dbs#2020-10-2120:29favilae.g., I want all release names for all entities which lose a release name but don’t gain a new one within a transaction (i.e. they only lose, not change their name):#2020-10-2120:30favila(d/q '[:find ?release-name
:in $before $txdata
:where
[$before ?release-name-attr :db/ident :release/name]
[$txdata ?e ?release-name-attr ?release-name _ false]
($txdata not [?e ?release-name-attr _ _ true])
]
(:db-before result) (:tx-data result))#2020-10-2120:30favila(untested)#2020-10-2120:40zhuxun2I think this is what I was looking for. Thanks!#2020-10-2120:52zhuxun2@U09R86PA4 Wait ... wasn't [$before ?release-name-attr :db/ident :release/name] implied? Why did you have to put it in the query?#2020-10-2120:55favila$txdata is a vector of datoms, which is a valid data source, but doesn’t understand that :release/name is an ident with an equivalent entity id. So just putting [$txdata _ :release/name] would never match anything#2020-10-2120:56favilathat you can say [_ :attr] or [_ :attr [:lookup "value"] or [_ :attr :ident-value] is magic provided by the datasource#2020-10-2120:56favilathe database “knows” that those map to entity ids and does it for you#2020-10-2120:57favilabut a vector datasource isn’t that smart#2020-10-2120:57favilaso you need to match the entity id of :release/name exactly#2020-10-2121:03zhuxun2This is very interesting detail. Thanks for the explanation.#2020-10-2207:34onetomTo make Ion deployment more comfortable, we started using the datomic.ion.dev/push and friends directly.
Those functions, however, seem to shell out to call clojure and expect it to be available on the PATH.
Since we are using nix-shells to manage our development environments, we deliberately have no clojure on our PATH by default, only within direnv managed shells.
Would it be possible to just call the necessary functions directly from the JVM process which runs the datomic.ion.dev/push function?
It seems like it's only doing some classpath computation, which should be possible to do directly with clojure.tools.deps...#2020-10-2209:44cmdrdatsWe've got a datomic on-prem transactor running, saving the data in mysql - we got this, and it died... how would I go about diagnosing the cause and fixing so it doesn't die?#2020-10-2209:45cmdrdatswe're running datomic-transactor-pro-1.0.6165.jar#2020-10-2209:51favilasuper-high-level, the transactor tried to update one of the top-level mutable rows in the database (“pods”) and it couldn’t so it self-destructed. If this happened immediately on txor startup, I would check the connection parameters and credentials are correct and the mysql user has the right permissions (it needs SELECT UPDATE DELETE on table datomic_kvs). Assuming the transactor started correctly and connected to mysql correctly and this happened randomly later, I don’t know. It could be something transient on mysql itself.#2020-10-2209:52favilaI would actually look at mysql logs first#2020-10-2209:59cmdrdatshmm - ok, makes sense - this happened randomly much later#2020-10-2210:00cmdrdatsI do know we've had weird random connection issues on our mysql hosts that we've had to workaround with reconnecting.. so that seems the most likely explanation, thanks for the info!#2020-10-2217:02Michael Stokleyi'm seeing some old materials around datomic that suggest you can query vanilla clojure data structures, such as vectors of vectors. eg:
(d/q '[:find ?first ?height
:in $a $b
:where [$a ?last ?first ?email]
[$b ?email ?height]]
[["Doe" "John" "
this does not run for me. is there a version of this that does work?#2020-10-2217:07favilaThis is on-prem with the peer api; if you are using the client api, you need to include a “real” database as a parameter (first parameter?) even if you don’t use it because that is how the client api finds a machine to send the query to#2020-10-2217:07favilaon-prem runs the query in-process; client (typically-not necessarily) sends the query over the network to another process#2020-10-2217:11Michael Stokleyi see, thank you. it would be terrific to be able to use generic data structures as databases, seems like that would have been the clojure way of doing things as opposed to locking you in to a nominal type#2020-10-2217:11favilathe client api doesn’t provide a query engine, so, they’re kind of at cross purposes#2020-10-2217:13favilato be clear: this works with the client api just fine, but you have to send it to something that can evaluate it#2020-10-2217:14Michael Stokleycan you say more, i'm not sure i follow, yet. this works - this being, using generic data structures as the db?#2020-10-2217:15favila(d/q '[:find ?first ?height
       :in $ $a $b
       :where
       [$a ?last ?first ?email]
       [$b ?email ?height]]
     (d/db client-connection)
[["Doe" "John" "#2020-10-2217:15Michael Stokleyi am using datomic.client.api, it's not on-prem#2020-10-2217:15favilaI’m saying that this should work#2020-10-2217:15favilanote I added a client db, but I didn’t use it in the query#2020-10-2217:16Michael Stokleyoh, interesting.#2020-10-2217:16favilaall of those arguments will be sent to the server (probably not in-process), the query will run, and you will get the result#2020-10-2217:16favilathe server that is backing the db object#2020-10-2217:16Michael Stokleyyeah, that works!#2020-10-2217:24Michael Stokleyi wonder if there's a way to use this for testing? maybe not, since my production query will necessarily be referring to $ (ie not $a or $b)#2020-10-2217:25Michael Stokleyit would be great if i could throw together a very simple database out of generic data structures and exercise my production query on that, instead of a real db#2020-10-2217:32favilaYou could, but real databases normalize entity references to entity ids for you (e.g. it knows :some-attr-keyword is eid 123). Without that you would have to construct your query or data carefully so that the comparisons are exact#2020-10-2217:33favilaalso many query helper functions only work on a real database because they use indexes directly#2020-10-2217:33favila(e.g. d/datoms)#2020-10-2217:33favilaI’m pretty sure get-else would fail, for example#2020-10-2217:37Michael Stokleyperhaps it's more practical to use a real db in tests, then.#2020-10-2219:59Joe LaneHey @michael740 , try the new memdb feature in the latest dev-local! #2020-10-2302:44Michael Stokleythanks @U0CJ19XAM, I'll check it out#2020-10-2217:05motformIs it possible to express a recursive query with the pull api where the data looks like :c/b m..1-> :b/a m..1 -> :a syntax starting from a/gid ? I.e. walking down refs (not components) that point “upward” from the top of hierarchy. I can easily do it from the bottom up, from c/gid, but I guess I just don’t get how to reverse the query. 
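The collections-as-data-sources exchange above can be written out as one self-contained sketch using the on-prem peer API (`datomic.api` on the classpath); the email and height values here are illustrative stand-ins for the data truncated in the pasted snippets:

```clojure
(require '[datomic.api :as d])

;; Plain collections of tuples act as data sources, bound to $a and $b
;; (note: data source names take no leading ?). Each :where clause
;; destructures one tuple shape.
(d/q '[:find ?first ?height
       :in $a $b
       :where
       [$a ?last ?first ?email]   ; $a rows: [last first email]
       [$b ?email ?height]]       ; $b rows: [email height]
     [["Doe" "John" "jdoe@example.com"]]
     [["jdoe@example.com" 180]])
;; => #{["John" 180]}
```

With the client API, as favila notes, the same call works once a real database value is added as the first `$` input, since that is how the client finds a process to evaluate the query.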
EDIT: Never mind, I just realised that this is what _ is for.#2020-10-2218:22Michael Stokleyit looks like the two argument comparison predicates such as < work perfectly well to compare instants when inside of a datomic query but not in normal clojure. it's confusing because the documentation says that most of the datomic query functions are the same as those found in clojure.core. anyone have any insight?#2020-10-2218:23schmee@michael740 < and some other common predicates are the exception: https://docs.datomic.com/cloud/query/query-data-reference.html#range-predicates#2020-10-2218:26Michael Stokleythe documentation does not indicate that < works with strings, inst, etc.#2020-10-2218:26Michael Stokleyi glad they do, though!#2020-10-2221:12jaretHi All! I wanted to announce the release of the Datomic Knowledgebase: http://ask.datomic.com/#2020-10-2221:22jaretFor anyone wondering... we will be migrating over all of the pendo/receptive requests we've received in the past. So if you don't see something you've requested with me or on our old portal feel free to re-ask or check back again in a week or so.#2020-10-2221:23kennyCurious when this should be used over the forum. #2020-10-2221:25jaretThe big gain from the forums, which is still the place to have discussions about Datomic Applications and to see announcements -- is to have the upvote button for features and harder questions.#2020-10-2221:28jaretOur previous tool Pendo/receptive had several limitations. It just wasn't as accessible as we wanted it to be to get that feedback loop on what features are important to the community.#2020-10-2221:29jaretI'll definitely be cross linking/posting from forum posts going forward if we get to a point where a feature is the best answer for whatever is being discussed.#2020-10-2311:58cmdrdatshi - we're trying to restore a backup of our datomic database (it's a tiny 9mb db, as a test), and it just seems to be hanging forever.. 
how do we go about figuring out what we're doing wrong? We've set the log level in the logback.xml for datomic to TRACE - and there's a tiny bit of logging (I'll attach that in thread), but nothing else#2020-10-2311:59cmdrdats#2020-10-2314:48jaret@U050CLJ53 Can you share what command you're using to run backup/restore?#2020-10-2314:49jaretIf you'd like feel free to log a case to me by e-mailing [email protected] and we can dive further.#2020-10-2319:10cmdrdats@U1QJACBUM we're doing this:
/storage/datomic/current/bin/datomic -Xmx1g -Xms1g restore-db "file:/storage/datomic/archive/restore/" "datomic:sql://...?jdbc:..."
if there's nothing we can really self diagnose at a high level, then sure, I'll send a mail on monday 🙂#2020-10-2319:27jaretTry restoring locally using :dev protocol, just to see if it's underlying storage related. But if it couldn't write to storage due to perms I would expect an error not a hang.#2020-10-2314:33millettjonI am trying to get started with datomic in a dev-local setup. Any idea why my dev-local db is not found? I can see db.log and log.idx files were created.
(ns flow.db
(:require [datomic.client.api :as d]))
(let [dir (str (System/getProperty "user.dir") "/var/datomic")
      db-name "flow"
      client (d/client {:server-type :dev-local
                        :storage-dir dir
                        :system "dev"})
      _ (d/create-database client db-name)
      conn (d/connect client {:db-name db-name})]
  conn)
;; Unhandled clojure.lang.ExceptionInfo
;; Db not found: flow
;; #:cognitect.anomalies{:category :cognitect.anomalies/not-found,
;; :message "Db not found: flow"}#2020-10-2314:48jaretCan you try testing with the absolute path of dir and just execute d/client without the let? Also does the absolute path already have a "dev" system folder?#2020-10-2314:49jaretthen list-dbs on the client#2020-10-2314:50jaret(d/list-databases client {})#2020-10-2314:56millettjonSure.
(def client (d/client {:server-type :dev-local
                       :storage-dir "/home/jam/src/flow/var/datomic"
                       :system "dev"}))
(d/create-database client "flow") ; => true
(d/list-databases client {}) ; => []#2020-10-2314:57millettjonfile system looks like this:
$ tree /home/jam/src/flow
/home/jam/src/flow
├── deps.edn
├──
├── src
│   └── flow
│       └── db.clj
└── var
    └── datomic
        └── dev
            ├── db.log
            └── log.idx#2020-10-2315:00millettjonI created var/datomic.
dev/ gets created by create-database fn#2020-10-2315:17millettjonI tried setting :storage-dir in ~/.datomic/dev-local.edn and it has the same problem. Creates some files but no db found.#2020-10-2315:30jaretWhat version of Dev-local are you using?#2020-10-2315:31jaretCan you share the .datomic/dev-local.edn file in your home directory?#2020-10-2315:32jaretMight be best to just share your deps.edn and I will try to re-create. So I can see version of client and dev-local.#2020-10-2315:37millettjon[email protected]#2020-10-2315:39millettjon[email protected]#2020-10-2316:02jaret@U071T1PT6 I don't see a version of client in your deps? Do you have datomic client in your .clojure/deps.edn? Could you include com.datomic/client-cloud "0.8.102"#2020-10-2316:07jaretAlso worth testing with the latest dev-local and use 0.9.225#2020-10-2316:38millettjonOk. I was following instructions here: https://docs.datomic.com/cloud/dev-local.html and didn't know about that additional dep. Unfortunately, adding it didn't make any difference. I will try updating to 0.9.225.#2020-10-2317:41jaret@U071T1PT6 I can't reproduce. Is there anything else I could be missing? Are you starting a new repl to do this? Can you try making a new system name to confirm you are local when a new dir is made? What OS are you using on your system?#2020-10-2319:16ghadilook at the args for create-database#2020-10-2319:16ghadineeds to be {:db-name "flow"} not "flow"#2020-10-2319:17ghadi@U071T1PT6 @U1QJACBUM#2020-10-2319:25jaret!!!
Ghadi, great catch!#2020-10-2319:25jaretThat's it#2020-10-2319:26ghadicalling it with a raw string probably shouldn't return true#2020-10-2319:26jaretNo it should not!#2020-10-2319:56millettjonThanks!#2020-10-2404:45onetomthis bit me a few times initially.
it would be very helpful to provide better error message for this situation.#2020-10-2414:05jaretTotally agree this seems to be a bug in using dev-local client and create DB. It should throw an error like in cloud for expected map. I've logged a bug and we'll look at fixing!#2020-10-2414:06jaret;Cloud client
(d/create-database client "testing")
Execution error (ExceptionInfo) at datomic.client.impl.shared/api->client-req (shared.clj:258).
Expected a map
;Dev local client
(d/create-database client "testing")
=> true
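For reference alongside the failing calls above, a sketch of the map-shaped arguments the client API expects (reusing the `client` and db name from this thread; the `list-databases` return value is illustrative):

```clojure
;; The client API takes argument maps, not bare strings:
(d/create-database client {:db-name "flow"})  ; => true, and the db really exists
(d/list-databases client {})                  ; e.g. => ["flow"]
(d/connect client {:db-name "flow"})          ; => a connection
```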
#2020-10-2414:12jaretin either case no DB is created, but we should throw an error in both cases!#2020-10-2415:48millettjonThanks for all the help. Working as expected now.😀#2020-10-2317:52Drew VerleeWhat's the best way to report/ask about questions on the official datomic docs? e.g The docs for import-cloud on the docs:
https://docs.datomic.com/cloud/dev-local.html#import-cloud
Don't match the docstring. I assume the docstring is correct as the documentation page lists the same value for source and dest.#2020-10-2320:07jaretI'll fix that!#2020-10-2320:07jaretThanks for catching it.#2020-10-2418:06Drew Verlee@U1QJACBUM thanks for fixing it. Do you know if there is a light weight way to get datomic cloud consulting? I run into little blockers here and there and I would rather shell some money then get stuck for a day off someone can easily help me trouble shoot things.#2020-10-2418:12jaret@U0DJ4T5U1 I am going to tag @U05120CBV on this. You can also e-mail him directly at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. He is running point on our Datomic consulting services along with a few other folks at Cognitect. I'll bring this up with him on Monday if you two don't connect here.#2020-10-2317:54dpsuttonThere’s a new ask.datomic site that probably fits the bill #2020-10-2317:58joshkhshould one be alarmed when periodically seeing an Unable to load index root ref <uuid> exception appear in Datomic Cloud logs? (from com.amazonaws.services.dynamodbv2.model.InternalServerErrorException )#2020-10-2320:14jaretDo you often delete DBs? In general, you can see this error if a client or node is asking for a deleted db. The error would also correlate to an outage but these calls are retried so if you don't see this error often or repeatedly it's probably not a major issue.#2020-10-2320:15jaretAs always, it is my support-person duty to recommend upgrading to the latest Cloud release CFT and if you'd like me to look more closely at your system logs I'd be happy to poke around with a Read-Only CloudWatch account. If you want to go down that path, log me a case at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> and I can take a look.#2020-10-2612:40joshkhthanks, Jaret. we haven't noticed any performance issues related to the exception, so i was mostly just curious. 
but since you mentioned that it could be related to often deleting DBs, which we very rarely do, then i'll just mention it in some future support ticket. it's a low priority for us. 🙂#2020-10-2322:51Drew VerleeWhen i try to use dev-tools to import a cloud locally it complains that it can't use the endpoint to connect/"name or service is not known". I'm connected and can query the databases though so i'm not sure what the issue is.#2020-10-2412:42Petrus Therondatomic-pro-1.0.6202 throws ActiveMQInternalErrorException when I try to create or connect to a Datomic DB:
clj
Clojure 1.10.1
user=> (require '[datomic.api :as d])
nil
user=> (d/connect "datomic:)
Execution error at datomic.peer/get-connection$fn (peer.clj:661).
Could not find newdb in catalog
user=> (d/create-database "datomic:)
Execution error (ActiveMQInternalErrorException) at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl/sendBlocking (ChannelImpl.java:404).
null
I’ve tried with both Oracle JDK 15 and OpenJDK 15.#2020-10-2413:32jaretI see you are connecting to the DB and then attempting to create the DB? Did this DB already exist or was it the product of a backup/restore? Did you recently upgrade to the new version of Datomic-pro? Or are you saying that this worked before you moved to JDK15? If so, what version were you previously running where this worked? I am going to go test with JDK 15 right now.#2020-10-2413:47Petrus TheronFull story here: https://stackoverflow.com/q/64512606/198927#2020-10-2422:42NassinDatomic doesn't work with jdk15, your safest bet with datomic is java 8#2020-10-2614:45jaretI've re-created the behavior and logged an anomaly for us to investigate further. In general, I am updating our docs to indicate that Datomic on-prem is tested to run against LTS versions of Java (8 and 11). @U051SPP9Z I agree with your assertion elsewhere that we should have a feature to detect when not on an LTS java version and throw a warning to move to one. I am looking at options for such a feature and logging a feature request for further investigation.#2020-10-2618:14NassinFWIW, datomic 1.0.6202 with Java 11 throws some jackson reflection warnings#2020-10-2414:41vnczDoes anybody know if there's a relation between db's T value and the txInstant of an entity?#2020-10-2414:43vnczEssentially I have a database and if I do (:t db) I get 7 as value. 
On the other hand, if I look for a txInstant for an entity via (def query '[:find ?tx :where [?e :person/id _ ?tx]]) I get very long number instead#2020-10-2414:48vnczWhat I am trying to do is "Given a certain entity ID, what was the t that has introduced/updated it?#2020-10-2414:58vnczFor anybody interested: https://ask.datomic.com/index.php/457/relation-between-t-and-db-txinstant#2020-10-2415:51Lennart BuitForgive me for not answering on ask, but this blog may interest you: https://observablehq.com/@favila/datomic-internals#2020-10-2416:32vnczOh sweet, let's check that out#2020-10-2416:32vnczThanks @UDF11HLKC#2020-10-2416:44vnczI can't find these functions in datomic.client.api 🤔#2020-10-2416:44vnczWhere are them?#2020-10-2418:44vnczThese function seem to be in datomic.api but I can't find it anywhere on maven/clojars#2020-10-2421:03Lennart BuitYou can call datomic.api functions in your queries. Or you can at least on client + peer server#2020-10-2421:19vncz@UDF11HLKC Ah ok so maybe it's only executed on the peer?#2020-10-2421:44vnczI'm a bit confused, I can't find such namespace anywhere and it does not work when doing it in a query (d/q '[:find ?e ?tx ?t :where [?e :person/id _ ?tx] [((t->tx ?tx)) ?t]] db) which makes sense, since it even the docs says that the functions executed must be in the class path.#2020-10-2414:49Alex Miller (Clojure team)This would be a great question to ask on the new forum https://ask.datomic.com#2020-10-2414:49vnczAh ok, I was not aware there was a specific forum#2020-10-2415:21Alex Miller (Clojure team)Just opened this week!#2020-10-2421:49vnczOh ok I found it, it seems like it's in com.datomic/datomic-free#2020-10-2422:00vnczAnd I got it working#2020-10-2422:01vncz#2020-10-2509:28Petrus TheronHey guys, I’ve been blocked for two days trying to get Datomic to talk to any non-memory storage on my machine. Any leads on why Datomic works fine for in-memory DB, but can’t connect to my local dev transactor?
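The relation vncz worked out above is a fixed mapping: a transaction's entity id is its t value offset into the `:db.part/tx` partition, and the on-prem peer library (`datomic.api`, which is where these functions live) converts in both directions. A sketch, not run against a live db:

```clojure
(require '[datomic.api :as d])

;; t -> transaction entity id (t shifted into the :db.part/tx partition)
(d/t->tx 7)               ; => 13194139533319
;; and back again
(d/tx->t 13194139533319)  ; => 7
```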
➜ datomic-debug clj
Clojure 1.10.1
user=> (require '[datomic.api :as d])
nil
user=> (d/create-database "datomic:)
true ;; in-mem works
user=> (d/create-database "datomic:)
Oct 25, 2020 9:26:42 AM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:787).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
I suspect an incompatibility with my JDK and Datomic’s queuing dependency, but having tried different versions of Clojure, Datomic (Pro and Free), Netty, HornetMQ and different JDKs, I can’t figure out why I can’t connect to or create a DB with :dev storage. What am I doing wrong?#2020-10-2509:40Petrus TheronOMG. Datomic transactor requires Java 8. Fixed by switching the transactor env to Java 1.8. https://forum.datomic.com/t/java-11-0-1-ssl-exception/734
Maybe the transactor can try to connect to itself on startup and complain if Java version is wrong?
(Thanks for the tip, @U011VD1RDQT.)
Depending on which client version of Datomic you’re running, you’ll get different error messages ranging from ActiveMQ, to SSL handshakes, to Netty errors.#2020-10-2515:45Dustin GetzI recall this being fixed in recent versions of Datomic, i could be wrong#2020-10-2608:55Petrus TheronHappens when running Datomic Pro 1.0.6202#2020-10-2612:59joshkhi have an HTTP Direct project setup behind an API Gateway, with a VPC Link resource/method configured to Use Proxy Integration. everything works fine. the proxy method request is configured to use AWS_IAM authorization which also works as expected.
when i inspect a request that makes it through the gateway and to my project, i see all of the keys listed in the Web Ion table [1] except for two: datomic.ion.edn.api-gateway/json and datomic.ion.edn.api-gateway/data
presumably these keys have the values i need to identify the requester's cognito identity, know about the gateway details etc. are they available when using HTTP Direct integration?
[1] https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion#2020-10-2711:20joshkhper yesterday's discussion, i've moved this to the forums https://forum.datomic.com/t/where-can-i-find-cognito-or-iam-details-from-api-gateway-when-using-http-direct/1675#2020-10-2616:47Drew VerleeDouble checking here that the new forum is the ideal way to ask questions of this type: https://ask.datomic.com/index.php/476/how-to-use-import-cloud-to-import-cloud-data-locally#2020-10-2617:12jaretIdeal place! I think I added a potential answer to your question. I believe you're missing :proxy-port which you need when going through a proxy (i.e. client access)#2020-10-2617:17Drew VerleeAdding proxy-port moves me forward. I must have tried to add it before when my connection was down.#2020-10-2617:33joshkh^ piggybacking on that question, is that also the ideal place for my question? i'm never quite sure where to post: slack, datomic forums, and i only just learned about http://ask.datomic.com#2020-10-2617:36joshkhpublic archives are ideal over slack's limited history. i'm just not sure which of the ones i listed get the most attention (sometimes feels like Slack to me)#2020-10-2618:18jaretin my dream world we would all feel the compulsion to cross post to ask/forums all of the great answers that get worked out here quickly in slack. 🙂#2020-10-2618:26joshkhagreed! and i'm happy to do that. but i wasn't sure of the level of tolerance for already answered questions getting posted to the forum... 'suppose Ask is a good place for that 🙂#2020-10-2618:27jaretMy level of tolerance is infinite. 
We lose so much to slack archive 🙂#2020-10-2618:32joshkhspeaking of the forums, sometimes i find unanswered posts (including my own*) and wonder if we're opening the wrong kind of discussions to garner responses * https://forum.datomic.com/t/enumerated-values-in-tuples-are-only-eids/1644.#2020-10-2618:38joshkhit makes me wonder if no response (here in Slack or on the forums) means the question or topic is nonsense, with my full understanding that i'm noisy and ask some dumb questions from time to time 🙂#2020-10-2619:24jarethaha! No they aren't dumb questions. I just overlook some questions or need to check with the team for a better personal understanding. I look at this tomorrow and ask the team if I can't reason through it. Sorry for not responding on this post!#2020-10-2619:26jaretAnd just so I am clear, are you asking why ref's in tuples are EIDs? Trying to discern if you need a feature request or are questioning if this is useful/intended?#2020-10-2620:15joshkhwell, to me, pull is acting differently for references to idents than it is to references to the same idents within tuples, and the impact is on unification. when i pull a typical reference to an ident, i get back {:db/id 999 :db/ident :some/enumerated-value} which is perfect because that value doesn't have to exist or unify.. it's a pull, and i can return that value as-is. this entity might have some enumerated value, or not. but when i pull a reference to an ident within a tuple, i get back just the EID 999. 
then, to resolve its :db/ident, i have to unify in the constraints [?maybe-some-enumerated-value :db/ident ?ident] which excludes any entities in the query that do not have a reference in a tuple to an enumerated value.#2020-10-2620:16joshkhso i'm wondering if that's by design, mostly because i ran into a use case where moving a reference to an ident into a tuple broke some logic based on pull 's usual behaviour of returning :db/idents#2020-10-2617:38marshalldatomic forums and/or ask.datomic are preferred#2020-10-2718:55Nate IsleyI created a new datomic cloud solo stack deployment today in AWS in support of exploring the https://github.com/Datomic/ion-starter project. After stepping through the starter connection steps, there were 2 CloudWatch alarms:
ConsumedReadCapacityUnits < 750 for 15 datapoints within 15 minutes
ConsumedWriteCapacityUnits < 150 for 15 datapoints within 15 minutes
A third CloudWatch alarm showed up sometime after I tried to deploy ions:
JvmFreeMb < 1 for 2 datapoints within 2 minutes#2020-10-2719:02jaret@UF9AED8CC No, likely unrelated. Those are the Dynamo DB scaling alarms. If you have been unable to deploy it's unlikely that you're getting far enough for DDB to be a factor as those alarms will fire when you exceed read and write capacity on a Dynamo DB. To diagnose your failure, you'll want to look in a few places:
1. The Code Deploy console (you can drill into the failure and find out which step it failed on)
2. CloudWatch logs for the ion deploy exception. You can find these logs by searching your CloudWatch console for datomic-<systemname>. Shown in our docs https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ions#2020-10-2719:15Nate IsleyOk, thank you. I saw a CodeDeploy message about health of nodes, which made me wonder if the Alarms were preventing the deploy.#2020-10-2718:56Nate IsleyCould these alarms be the cause of the ion deploy failing?#2020-10-2720:33localshredHi friends, I'm working on a retry+backoff for the :cognitect.anomalies/unavailable "Loading Database" exception that we get after ion restarts. If this is a common issue like the troubleshooting docs suggest, I'm wondering how others are handling this. My current approach when getting the connection with d/connect is to try/catch when performing a simple check if the connection is active (something like (d/db-stats (d/db conn))). I've also considered doing a d/q query for some specific datom that I know is present. Any thoughts or other ideas?#2020-10-2720:35Alex Miller (Clojure team)you might want to ask on https://ask.datomic.com#2020-10-2720:35localshredOk, thanks @alexmiller, here's the question on the forum https://ask.datomic.com/index.php/486/approach-connection-cognitect-anomalies-unavailable-exceptions#2020-10-2722:45vnczIs there anywhere specified the logic that Datomic uses to get the data from the storage server to the peer (whether it's in process or a separate server)?#2020-10-2723:13favilaThe peer reads storage directly for whatever it needs#2020-10-2723:42vncz@U09R86PA4 I heard around in some videos that the Transactor pushes the updates?#2020-10-2723:51favilaIt broadcasts just-completed txs to already connected peers, but not index data#2020-10-2800:09vnczUnderstood. I must have understood incorrectly then. I recall a video saying something different#2020-10-2814:21vnczIs there a way to DRY these two queries? They look almost identical apart from the id parameter.
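localshred's retry-with-backoff idea for the `:cognitect.anomalies/unavailable` "Loading Database" error can be sketched as a generic wrapper. This is a hypothetical helper, not a Datomic API; the anomaly-category check follows the `:cognitect.anomalies` convention, and the delays are illustrative:

```clojure
(defn with-retry
  "Call f, retrying with exponential backoff while it throws an
  ExceptionInfo whose anomaly category is :cognitect.anomalies/unavailable
  (e.g. the \"Loading Database\" error after an ion restart)."
  [f & {:keys [max-tries base-ms] :or {max-tries 5 base-ms 200}}]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt max-tries)
                              (= :cognitect.anomalies/unavailable
                                 (:cognitect.anomalies/category (ex-data e))))
                       {:retry true}
                       (throw e))))]
      (if (contains? result :ok)
        (:ok result)
        (do (Thread/sleep (long (* base-ms (Math/pow 2 (dec attempt)))))
            (recur (inc attempt)))))))

;; usage sketch, e.g. as the "is the connection live yet?" probe:
;; (with-retry #(d/db-stats (d/db conn)) :max-tries 6)
```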
Is there a way without having to manipulate the list manually?#2020-10-2814:23vnczWell I can probably get away with it by using the map form, but I was wondering whether there's a better way 🤔#2020-10-2814:55favilaI question whether this is better, but it is DRY:#2020-10-2814:56favila(defn people [db person-pull person-ids]
  (d/q '[:find (pull ?e person-pull)
         :in $ person-pull ?ids
         :where
         (or-join [?e ?ids]
           (and [(= ?ids *)]
                [?e :person/id])
           (and [(!= ?ids *)]
                [(identity ?ids) [?id ...]]
                [?e :person/id ?id]))]
       db
       person-pull
person-ids))#2020-10-2814:56favilause the special id value * to mean “everyone”#2020-10-2814:56favilaotherwise it’s a vector of entity identifiers#2020-10-2815:08vnczHmm does not seem worth the hassle. I was thinking of using the map form and manually inject the parameter @U09R86PA4 …what do you think?#2020-10-2815:10favilathere may be a penalty from not caching the query plan, since each query is a new instance. But I’m not sure if cache lookup is by identity or by value#2020-10-2816:11vnczGot it. good point. Seems like a case where duplication is cheaper than the wrong abstraction 🙂#2020-10-2814:24ghadiUse d/pull directly with the first form#2020-10-2814:27vnczCan I? I do not really have the entity id, I have my "internal" id#2020-10-2814:49favilapull (and most things) can take any entity identifier, which includes entity ids, idents, and lookup refs. In your case, (d/pull db [:person/id :person/name :person/surname] [:person/id "the-id"]) would work#2020-10-2815:07vnczAh interesting, that I didn't know. Thanks!#2020-10-2814:24ghadiRather than pull in the find spec of a query#2020-10-2814:24vnczOk fair#2020-10-2814:29vnczI was thinking that if I would be switching to map form I could manipulate the query easily and add the parameter?#2020-10-2815:43Aleh AtsmanHello, can somebody clarify for me the purpose of cast/event , is it only for infrastructure level events or can it be used for application level events as well?#2020-10-2817:10joshkhsomeone else can correct me if i'm wrong, but i use cast/event for all sorts of things including application level logging#2020-10-2818:16jaret@U4N27TADS to echo what Josh is saying it's for ordinary occurrences of interest to the operator. Whereas an Alert is for an extraordinary occurrence that requires operator intervention. These are conventions you can choose to follow or not in your use of the ion.cast.#2020-10-2914:08Aleh AtsmanHello @U0GC1C09L, @U1QJACBUM! Thank you for explanation. 
In the end we decided to go with event bridge or sns topic directly.
It is problematic to route events from cloudwatch logs to aws lambda functions. As the only option there is subscription filter (max 2 per log group).
Maybe I'm missing something, but I haven't found a solution where I am able to get events submitted using cast/event to lambda functions.#2020-10-2914:20joshkhi don't know if this is useful to you, but we cast/alert exceptions to CloudWatch, and then use a CLJS Lambda to send them to Slack to torture our developers. cast/events are not really any different, except for maybe the frequency at which they appear, but i think log streams are batched. (sorry for the lack of a proper README, it was a hobby project)
https://github.com/joshkh/datomic-alerts-to-slack
> It is problematic to route events from cloudwatch logs to aws lambda functions. As the only option there is subscription filter (max 2 per log group).
for application level logs and alerts, we tend to the use a common message throughout the application (e.g. {:msg "<app-name>-application-log"} ) , and then we attach other keys such as :env and :query-group . this provides us different levels of filtering while keeping our logs all in one place#2020-10-2914:22joshkhbut if SNS works for you then go for it! 🙂 i just prefer CloudWatch because cast serialises Clojure to JSON very well, and being able to add arbitrary keys at any level is useful for filtering#2020-10-2912:35Matheus Moreirahello! today i notice a weird interaction between datomic client api and djblue/portal (https://github.com/djblue/portal): when this last one is not on the classpath, then i can obtain a connection to my (local, datomic pro) database; when portal is on the classpath, connecting to the database fails with the following error:
Exception in thread "async-dispatch-1" java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/msgpack/MessagePack
at com.cognitect.transit.TransitFactory.writer(TransitFactory.java:104)
at cognitect.transit$writer.invokeStatic(transit.clj:161)
at cognitect.transit$writer.invoke(transit.clj:139)
at $marshal.invokeStatic(io.clj:48)
at $marshal.invoke(io.clj:38)
at $client_req__GT_http_req.invokeStatic(io.clj:76)
at $client_req__GT_http_req.invoke(io.clj:73)
at datomic.client.impl.shared.Client._async_op(shared.clj:398)
at datomic.client.impl.shared.Client$fn__34578$state_machine__5717__auto____34593$fn__34595.invoke(shared.clj:423)
at datomic.client.impl.shared.Client$fn__34578$state_machine__5717__auto____34593.invoke(shared.clj:422)
at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973)
at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975)
at datomic.client.impl.shared.Client$fn__34578.invoke(shared.clj:422)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at clojure.core.async.impl.concurrent$counted_thread_factory$reify__469$fn__470.invoke(concurrent.clj:29)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NoClassDefFoundError: org/msgpack/MessagePack
at com.cognitect.transit.impl.WriterFactory.getMsgpackInstance(WriterFactory.java:77)
at com.cognitect.transit.TransitFactory.writer(TransitFactory.java:95)
... 20 more
Caused by: java.lang.ClassNotFoundException: org.msgpack.MessagePack
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 22 more
#2020-10-2912:35Matheus Moreirahello! today i notice a weird interaction between datomic client api and djblue/portal (https://github.com/djblue/portal): when this last one is not on the classpath, then i can obtain a connection to my (local, datomic pro) database; when portal is on the classpath, connecting to the database fails with the following error:
Exception in thread "async-dispatch-1" java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/msgpack/MessagePack
at com.cognitect.transit.TransitFactory.writer(TransitFactory.java:104)
at cognitect.transit$writer.invokeStatic(transit.clj:161)
at cognitect.transit$writer.invoke(transit.clj:139)
at $marshal.invokeStatic(io.clj:48)
at $marshal.invoke(io.clj:38)
at $client_req__GT_http_req.invokeStatic(io.clj:76)
at $client_req__GT_http_req.invoke(io.clj:73)
at datomic.client.impl.shared.Client._async_op(shared.clj:398)
at datomic.client.impl.shared.Client$fn__34578$state_machine__5717__auto____34593$fn__34595.invoke(shared.clj:423)
at datomic.client.impl.shared.Client$fn__34578$state_machine__5717__auto____34593.invoke(shared.clj:422)
at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973)
at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975)
at datomic.client.impl.shared.Client$fn__34578.invoke(shared.clj:422)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at clojure.core.async.impl.concurrent$counted_thread_factory$reify__469$fn__470.invoke(concurrent.clj:29)
at clojure.lang.AFn.run(AFn.java:22)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NoClassDefFoundError: org/msgpack/MessagePack
at com.cognitect.transit.impl.WriterFactory.getMsgpackInstance(WriterFactory.java:77)
at com.cognitect.transit.TransitFactory.writer(TransitFactory.java:95)
... 20 more
Caused by: java.lang.ClassNotFoundException: org.msgpack.MessagePack
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 22 more
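A common workaround for a NoClassDefFoundError on a transitive dependency (a sketch, assuming the diagnosis given below in the thread — that portal excludes msgpack while the datomic client expects it via transit — is correct) is to declare the missing library as a top-level dependency, since top-level deps in tools.deps are not subject to another library's exclusions. The version below is an assumption; verify it against your own `clj -Stree` output:

```clojure
;; deps.edn (fragment) -- hypothetical; pin msgpack at the top level so
;; that no library's exclusion can drop it from the computed classpath.
;; 0.6.12 is the version transit-java has historically depended on
;; (an assumption -- verify with: clj -Stree | grep msgpack)
{:deps {org.msgpack/msgpack {:mvn/version "0.6.12"}}}
```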
#2020-10-2912:37Matheus Moreirai noticed that portal has com.cognitect/transit-cljs, transit-js, transit-clj, and transit-java as dependencies. why would these dependencies interfere with datomic obtaining a connection?#2020-10-2912:38Matheus Moreira(there are other dependencies but i don’t believe they have anything to do with the case: cheshire 5.10.0 and http-kit 2.5.0).#2020-10-2912:42Alex Miller (Clojure team)a) what are you using to make the classpath
b) if clj, what version? (prob best to upgrade to latest if you haven’t)
c) please provide the set of project deps that repro this#2020-10-2912:53Matheus Moreirai am using clojure tools (clojure 1.10.1) and i open a repl using cider-jack-in-clj. then i start my system via integrant.repl.#2020-10-2912:54Matheus Moreirathis is my deps.edn. when connecting to the repl, i add -A:dev to the clj command.#2020-10-2913:04favilaPortal requires transit but excludes msgpack#2020-10-2913:06Matheus Moreiraand datomic requires msgpack?#2020-10-2913:07Matheus Moreiraif this is the case, is it a classpath resolution conflict/problem, i.e. msgpack should be in the final classpath because datomic requires it, even if portal excludes it?#2020-10-2913:26Alex Miller (Clojure team)what version of the clojure tools? clj -Sdescribe#2020-10-2913:27Alex Miller (Clojure team)there were issues with this kind of scenario that were fixed a few versions ago#2020-10-2913:27Alex Miller (Clojure team)release info here https://clojure.org/releases/tools - latest is 1.10.1.727#2020-10-2913:33favilaThe client apparently requires msgpack but doesn’t depend on it directly, expecting it via transit#2020-10-2913:33favilaYes#2020-10-2913:33favilaPortal uses transit but not the msgpack encoding option, so it doesn’t want to bring it in#2020-10-2913:33favilaDatomic client uses transit with msgpack encoding but doesn’t require it directly#2020-10-2913:34favilaI wonder what maven would compute in this case#2020-10-2913:38Alex Miller (Clojure team)I'm stepping away till this afternoon but I'd be happy to look at this in depth when I get back. My recommendation would be to move to latest clj if you haven't already as there have been fixes in this area.#2020-10-2914:14Matheus Moreirathanks, @U064X3EF3 and @U09R86PA4. i’ll update clj tools if mine is not up-to-date.#2020-10-2914:17Matheus Moreirahttps://clojurians.slack.com/archives/C03RZMDSH/p1603978063215500?thread_ts=1603974900.204700&cid=C03RZMDSH
mine was 1.10.1.716#2020-10-2914:20favila(Sorry if I’m confusing, slack must have barfed on my messages because they’re all out of order and 20 min late)#2020-10-2914:31Matheus Moreira@U064X3EF3 fyi i updated clj tools and the error still happens.#2020-10-2918:57Alex Miller (Clojure team)I do actually see msgpack on the classpath with these deps. Can you run clj -Sforce -Stree -A:dev and grep the output for msgpack? force will force recomputing the classpath - it's possible you are seeing an older cached classpath#2020-10-2918:59Alex Miller (Clojure team)also if you're still seeing it after that, do clj -Strace -A:dev and attach the trace.edn file it emits here#2020-11-0311:39Matheus Moreiraclj -Sforce -Stree -A:dev returns nothing. djblue/portal was commented out in deps.edn, maybe that is why you see it in your output.#2020-11-0311:40Matheus Moreira@U064X3EF3 sorry for the delay in my reply…#2020-10-2919:04joshkhwhen untruthifying™ a boolean value of a schema attribute, is there an advantage to choosing one method over the other?
[:db/retract :some/ident :db/isComponent true]
vs
[:db/add :some/ident :db/isComponent false]
just curious#2020-10-2919:46Lennart Buitfwiw, they are not equivalent, right? The first removes the fact about being a component altogether, and the second asserts it as false#2020-10-2921:59joshkhyup. in this case the resulting behaviours are the same (:some/ident is no longer a component), but the resulting annotations of the ident are different (false vs missing). i'm just wondering why someone might choose one over the other.#2020-10-2920:09Michael Stokleydo folks compose pull patterns? suppose i have an entity and for one use case, i need a set of attrs; for another, i need a non-overlapping different set of attrs. since it's all data, maybe they can be defined separately but then composed so i can make one db call instead of n#2020-11-0103:21steveb8n@michael740 I do this a lot. I use the apply-template fn in clojure core to inject pull vectors into other pull vectors or queries. it’s simple and it works well.#2020-11-0103:21steveb8neven better if you match a transform fn for the results to each pull expr. then you can compose them to process the results as well.#2020-11-0223:47Michael Stokleydo you mean apply-template in clojure.template?#2020-11-0321:57steveb8nsorry, yes, that’s the one. in my case, I generate the pull expressions and post query transform fns from a domain model. something like https://github.com/stevebuik/clj-code-gen-hodur#2020-10-2920:10Michael Stokleyi probably want to use sets instead of vectors in the initial representation? merge those, then swap the sets for vectors before use with datomic#2020-10-2920:12Michael Stokleyhandling the vector syntax vs the map syntax might be tricky#2020-10-2920:14kenny@michael740 We use the https://github.com/edn-query-language/eql for this. #2020-10-2920:15Michael Stokleyhttps://github.com/edn-query-language/eql#unions ?#2020-10-2920:17kennyWe go pull pattern -> ast -> merge -> pull pattern. #2020-10-2920:17kennyThe merge is typically very simple since it’s in the ast format. 
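The pull-pattern -> ast -> merge -> pull-pattern round trip kenny describes can be sketched with the EQL library; this assumes `edn-query-language.core/merge-queries` (which merges two queries via their ASTs), and the attribute names are made up:

```clojure
(require '[edn-query-language.core :as eql])

;; two pull patterns for the same entity, one per use case
(def summary-pattern [:user/id :user/email])
(def detail-pattern  [:user/id {:user/address [:address/city]}])

;; merge them so a single d/pull call can serve both use cases
(eql/merge-queries summary-pattern detail-pattern)
;; => a single pattern containing :user/id, :user/email,
;;    and the {:user/address [...]} join
```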
#2020-10-2920:19Michael Stokleythanks!#2020-10-2920:19Michael Stokleythis looks cool as all get out#2020-10-2920:20kennyWe also use pathom so this is a very natural lib for us to use. You could probably write a smaller version to just do what you need if you don’t want to bring on a new dep. #2020-10-3016:01kennyI am running dev-local/import-cloud (in parallel for unique db-names. May be relevant, not sure) and received this exception. Any idea what would cause this?#2020-10-3016:07kennyThis definitely has something to do with being parallel. It will consistently fail when run in parallel.#2020-10-3016:09kenny@U1QJACBUM I'm not sure that your answer https://ask.datomic.com/index.php/493/is-dev-local-import-cloud-thread-safe to true 🙂#2020-10-3017:25jarethuh. Ok I will run this down @U083D6HK9 . Could you copy your comment over to the ask question so we don't lose this to slack archiving?#2020-10-3017:25jaretI am happy to copy it, but would make more sense if you replied to me there#2020-10-3017:27kennySure. I was hoping to provide a bit more insight than this exception though 🙂 Just wasn't sure what would be helpful for you all. I may be able to create a repro.#2020-10-3017:29jaretIt's helpful! and if you can make a repro that'd be better.#2020-10-3017:30kennyOk. Will follow up in a bit.#2020-10-3017:30jaret@U083D6HK9 also it might be relevant to know if these DBs are special (i.e. large or something)#2020-10-3017:49kennyWhat is large?#2020-10-3021:57kennyI've also noticed that import-cloud can hang forever. I left an import running for the past 4 hours and the normal red "Loading ..." didn't even appear.#2020-10-3022:02kennyThe exception thrown is also very inconsistent. Will try to include everything.#2020-10-3022:10kennyComment blocks don't have rich formatting 😢#2020-11-0315:18kennyfyi, you can delete https://ask.datomic.com/index.php/493/is-dev-local-import-cloud-thread-safe?show=498#c498. 
I didn't see a way to on my end.#2020-11-0317:50jaretKenny, I think I deleted teh right one for you#2020-11-0317:50jaretI'll look at turning on the ability for a user to delete their own post#2020-10-3110:01Jakub Holý (HolyJak)I had no idea Datomic was a place in Finland. Though perhaps nothing about :flag-fi: should surprise me :rolling_on_the_floor_laughing:#2020-11-0102:55yubrshenWhat's the meaning of the following error message, and how can I investigate and fix it?
:db.error/lookup-ref-attr-not-unique Attribute values not unique: :user/email
Here is the source code that will recreate the error:
(ns grok.db.add-user
  (:require [grok.db.core :as SUT]
            [grok.db.schema :refer [schema]]
            [datomic.api :as d]))

(def sample-user
  {:user/id (d/squuid)
   :user/email "
The code will create a user in the mem database, and retrieve it by its email address (for the purpose of having a user for tests)
Here is the related schema code for user:
[;; ## User
 ;; - id (uuid)
 ;; - full-name (string)
 ;; - username (string)
 ;; - email (string => unique)
 ;; - password (string => hashed)
 ;; - token (string)
 {:db/ident :user/id
  :db/valueType :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity
  :db/doc "ID of the User"}
 {:db/ident :user/email
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc "Email of the User"}
 {:db/ident :user/password
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc "Hashed Password of the User"}]
Here is the code of create-conn:
(defn create-conn [db-uri]
  (when db-uri
    (d/create-database db-uri)
    (let [conn (d/connect db-uri)]
      conn)))
Here is the full error trace:
2. Unhandled clojure.lang.Compiler$CompilerException
Error compiling test/grok/db/add_user.clj at (21:1)
#:clojure.error{:phase :compile-syntax-check,
:line 21,
:column 1,
:source
"/home/yshen/programming/clojure/learn-immutable-stack-with-live-coding-ankie/grok/server/test/grok/db/add_user.clj"}
Compiler.java: 7648 clojure.lang.Compiler/load
REPL: 1 user/eval19658
REPL: 1 user/eval19658
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7132 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 1973 clojure.core/with-bindings*
core.clj: 1973 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 660 clojure.core/apply
regrow.clj: 20 refactor-nrepl.ns.slam.hound.regrow/wrap-clojure-repl/fn
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 84 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 56 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 152 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 202 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 201 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 834 java.lang.Thread/run
1. Caused by datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:db.error/lookup-ref-attr-not-unique Attribute values not unique: :user/email
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Attribute values not unique: :user/email",
:db/error :db.error/lookup-ref-attr-not-unique}
error.clj: 79 datomic.error/arg
error.clj: 74 datomic.error/arg
error.clj: 77 datomic.error/arg
error.clj: 74 datomic.error/arg
db.clj: 590 datomic.db/resolve-lookup-ref
db.clj: 569 datomic.db/resolve-lookup-ref
db.clj: 610 datomic.db/extended-resolve-id
db.clj: 606 datomic.db/extended-resolve-id
db.clj: 621 datomic.db/resolve-id
db.clj: 614 datomic.db/resolve-id
db.clj: 2295 datomic.db.Db/entity
api.clj: 171 datomic.api/entity
api.clj: 169 datomic.api/entity
add_user.clj: 21 grok.db.add-user/eval19672
add_user.clj: 21 grok.db.add-user/eval19672
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7636 clojure.lang.Compiler/load
REPL: 1 user/eval19658
REPL: 1 user/eval19658
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7132 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 1973 clojure.core/with-bindings*
core.clj: 1973 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 137 clojure.lang.RestFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 660 clojure.core/apply
regrow.clj: 20 refactor-nrepl.ns.slam.hound.regrow/wrap-clojure-repl/fn
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 84 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 56 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 152 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 202 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 201 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 834 java.lang.Thread/run
I ran into this problem when I followed the coding example at https://www.youtube.com/watch?v=Fz6LxSSc_GE at 6:06 (The Immutable Stack - Building Anki Clone using Clojure, Datomic and ClojureScript (Part 5)). In the video, there was no error, but I did get one, and could reproduce it as above.#2020-11-0103:26favilaThe schema for :user/email does not include a :db/unique constraint, therefore you cannot use this attribute for lookups as you do in your d/entity call#2020-11-0103:27favilaThus “lookup ref attr not unique” in the error message#2020-11-0103:33yubrshen@U09R86PA4 Thanks for the quick and effective help!#2020-11-0121:49Jakub Holý (HolyJak)I guess "lookup ref attr not declared as :db/unique" would be a more helpful error message.#2020-11-0202:10yubrshenIt's strange that once I added :db/unique to the existing schema without transacting it, just evaluating the schema definition, it works for retrieving the user by email address.
But later, when I actually transacted the updated schema, I ran into "Error: {:db/error :db.error/unique-without-index, :attribute :user/email}"#2020-11-0118:53yubrshenHow do I fix this error?
"Error: {:db/error :db.error/unique-without-index, :attribute :user/email}"
The error happened when I added more entities to the schema and did (d/transact conn schema) again to update the schema.
This is a consequence of my prior error referred to in https://clojurians.slack.com/archives/C03RZMDSH/p1604199338251700 where I had to change my schema to add a :db/unique constraint to :user/email
I wouldn't mind starting from scratch with a new database, but I have not learned how to do that yet.
But if it were in a production system, when I modify my schema, what's the proper way to correct and update?#2020-11-0121:53Jakub Holý (HolyJak)I wish there was a catalog of datomic errors with explanations and guidance. Searching the net for "datomic unique-without-index" yields nothing :-(#2020-11-0121:56favilaYou need a value index before you can make a value unique. See the docs on schema changes, it has a table of all legal transitions#2020-11-0202:02yubrshenMy problems are that I don't know enough Datomic to figure out how to have "a value index". I'm studying the documentation on schema change at https://docs.datomic.com/cloud/schema/schema-change.html
There are two pre-conditions. In order to add a uniqueness constraint to an attribute, both of the following must be true:
> The attribute must have a cardinality of `:db.cardinality/one`.
> If there are values present for that attribute, they must be unique in the set of current database assertions.
For the first one, my schema for :user/email already has :db.cardinality/one.
For the second one, I don't know how to handle:
1. how to check if the values present for :user/email are unique or not
2. If not, how to fix them.#2020-11-0212:18favilaAre you using cloud or on prem? You cite cloud docs but this sounds like an on-prem problem. (Cloud indexes all values by default)#2020-11-0212:20favilaOn on-prem, there is a :db/index true#2020-11-0215:09yubrshenI'm using on-prem, actually just dev one. I'll take a look at :db/index @U09R86PA4 Thanks for the pointer!#2020-11-0215:10favilahttps://docs.datomic.com/on-prem/schema.html#altering-schema-attributes#2020-11-0215:10favila> All alterations happen synchronously, except for adding an AVET index. If you want to know when the AVET index is available, call https://docs.datomic.com/on-prem/javadoc/datomic/Connection.html#syncSchema(long). In order to add :db/unique, you must first have an AVET index including that attribute.#2020-11-0215:11favila(quote from the docs)#2020-11-0215:25yubrshen@U09R86PA4 Yes, the following worked:
(def tx-add-index @(d/transact conn [{:db/id :user/email
:db/index true}]))
(def tx-fix @(d/transact conn [{:db/id :user/email
:db/unique :db.unique/identity}]))
where conn is a connection to the on-prem (dev) database.
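The earlier question of how to check whether existing :user/email values are already unique (the second pre-condition for adding :db/unique) can be answered with an aggregate query; a hedged sketch, using the attribute name from the thread:

```clojure
;; group by value, counting entities per value; any value asserted on
;; more than one entity blocks adding :db/unique, so this must be empty
(->> (d/q '[:find ?email (count ?e)
            :where [?e :user/email ?email]]
          (d/db conn))
     (filter (fn [[_email n]] (> n 1))))
```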
Thanks again for your coaching!#2020-11-0210:11lambdamHello,
I'm discovering Datomic entity specs.
I tried to trigger a spec error and here is the message:
"Entity temp-id missing attributes
The doc example gives:
"Entity 42 missing attributes [:user/email] of by :user/validate"}
Clearly, the serialization of the missing attribute seems to go wrong.
I'm using the latest version of Datomic (`1.0.6202` ).
Is it a known problem?#2020-11-0217:27marshallcan you share your :admin/validate spec ?#2020-11-0316:37lambdamHere is the spec:
{:db/ident :admin/validate
 :db.entity/attrs [:admin/email :admin/hashed_password]}
and here are the attribute declarations:
{:db/ident :admin/email
 :db/valueType :db.type/string
 :db/unique :db.unique/identity
 :db/cardinality :db.cardinality/one
 :db.attr/preds myproject.entities.entity/email?}
{:db/ident :admin/hashed_password
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one}
The only "particular" thing that I see is that the email field has an attribute predicate.
Thanks#2020-11-0317:15jaretHey @U94V75LDV I've made a ticket to look at this more closely. I'll keep you updated on what I find could you DM me an e-mail so I can contact you in the event that this slack convo gets archived while I am poking around?#2020-11-0317:26lambdamThank you very much!
I'll do it right now.#2020-11-0210:34lambdamAlso I noted these points that seem weird:
1 - The documentation says :db/ensure is a "virtual attribute", but when I then pull all the attributes of the entity, the :db/ensure field appears.
{:db/id 17592186045425,
:db/ensure [#:db{:id 17592186045420}],
:user/hashed-password "...",
...}
I then don't get what a "virtual attribute" is, then.
2 - After successfully transacting an entity with its spec "activated", I could then retract a field without triggering the entity spec:
(datomic.api/transact
  conn*
  [[:db/retract 17592186045425 :user/hashed_password]])
The resulting entity violates the spec but nothing was triggered. Is this the desired behaviour of entity specs?
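On the second point, the documented behavior (as I read it) is that entity specs are opt-in per transaction: an entity is only checked against a spec when that same transaction asserts :db/ensure for it, so a bare retraction is never validated. A sketch, reusing the entity id from the thread and a hypothetical :user/validate spec ident:

```clojure
;; the retraction alone runs no spec check
@(d/transact conn [[:db/retract 17592186045425 :user/hashed_password]])

;; asserting :db/ensure in the same transaction triggers the check,
;; which should now fail because the required attribute is gone
@(d/transact conn [[:db/retract 17592186045425 :user/hashed_password]
                   [:db/add 17592186045425 :db/ensure :user/validate]])
```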
Thanks#2020-11-0214:14bhurlowdoes Datomic use auto-discovery when integrating with the managed memcached in AWS? Or do we need to pass in each relevant memcached node individually?#2020-11-0312:46joshkhshould i be able to upsert an entity via an attribute which is a reference and also unique-by-identity?#2020-11-0313:06favilayes#2020-11-0312:50joshkhfor example
(d/transact conn
  {:tx-data
   [; upsert a school entity where :school/president is a reference and unique-by-identity
    {:school/president {:president/id 12345}
     :school/name "Bowling Academy of the Sciences"}]})
#2020-11-0313:07favilaThis is actually two upserts isn’t it? :president/id also?#2020-11-0315:26joshkhyes, you are correct and that is indeed the problem. it seems that you cannot upsert two entities that reference each other within the same transaction.
for example, running this transaction twice causes a datom conflict
(d/transact conn
  {:tx-data
   [; a president
    {:president/id "The Dude" :db/id "temp-president"}
    ; a school with a unique-by-identity
    ; :school/president reference to the president
    {:school/president "temp-president"
     :school/name "Bowling Academy of Sciences"}]})
whereas both of these transactions upsert as expected
(d/transact conn
  {:tx-data
   [; a president
    {:president/id "The Dude" :db/id "temp-president"}]})
(d/transact conn
  {:tx-data
   [; a school with a unique-by-identity
    ; :school/president reference to the president
    {:school/president 101155069755476 ; <- known dbid
     :school/name "Bowling Academy of Sciences"}]})
#2020-11-0315:27joshkh(note the known eid in the second transaction)#2020-11-0314:15vnczIs there any specific reason why some kind of selection can only be done using the Peer Server?#2020-11-0314:16favilaWhat do you mean by “selection”?#2020-11-0314:16vnczLet me give you an example#2020-11-0314:17vncz:find [?name ?surname] :in $ :where [?e :p/name ?name] [?e :p/surname ?surname]#2020-11-0314:17vnczThis query cannot be executed by the peer library#2020-11-0314:18vnczThis one can
:find ?name ?surname :in $ :where [?e :p/name ?name] [?e :p/surname ?surname]#2020-11-0314:18vncz@U09R86PA4#2020-11-0314:19favilaah, ok, those are called “find specifications”#2020-11-0314:20vnczYes, these ones. It seems like the Peer Library can only execute the "Collection of List" one#2020-11-0314:20favilaand it’s the opposite: only the peer API supports these; the client API (the peer server provides an endpoint for the client api) does not#2020-11-0314:20vnczThis is weird, I'm using Datomic-dev (which I guess is using the peer library?!) and I can't execute such queries#2020-11-0314:21faviladev-local?#2020-11-0314:21vnczYes#2020-11-0314:21favilathat uses the client api. (require '[datomic.client.api])#2020-11-0314:21favilathe peer api is datomic.api#2020-11-0314:22vncz#2020-11-0314:22favilacorrect#2020-11-0314:22favilabut you are using a client api#2020-11-0314:23favilathe client api does not support these#2020-11-0314:23vnczHmm 🤔#2020-11-0314:23vnczOk so in theory I should just change the namespace requirement?#2020-11-0314:23favilano, datomic.api is not supported by dev-local#2020-11-0314:23vnczAh ok so there's no way around it basically#2020-11-0314:24favilaMaybe historical background would help: in the beginning was datomic on-prem and the peer (`datomic.api` ), then came cloud and the client-api, and the peer-server as a bridge from clients to on-prem peers.#2020-11-0314:24faviladev-local is “local cloud”#2020-11-0314:24favilathat came even later#2020-11-0314:24favila(like, less than two months ago?)#2020-11-0314:24vnczOh ok, so it's a simulation of a cloud environment. 
I guess I was confused by the fact that it's all in the same process#2020-11-0314:25favilathe client-api is designed to be networked or in-process; in dev-local or inside an ion, it’s actually in-process#2020-11-0314:25vnczGot it. So to keep it short I should either move to Datomic-free on Premise or work around the limitation in the code#2020-11-0314:26favilaas to why they dropped the find specifications, I don’t know. My guess would be that people incorrectly thought that it actually changed the query performance characteristics, but actually it’s just a convenience for first, map first, etc#2020-11-0314:27favilathe query does just as much work and produces a full result in either case#2020-11-0314:27vnczI could see these conveniences being useful though. The idea of having to manually do that every time is annoying.#2020-11-0314:27vnczNot the end of the world, but still#2020-11-0317:09kschltzHi there.
We've been facing an awkward situation with our Cloud system
From what I've seen of Datomic Cloud architecture, it seemed like I can have several databases in the same system, as long as there are transactor machines available in my Transactor group.
With that in mind, we scaled our compute group to 20 machines, to serve our 19 dbs. All went well for a few months, until 3/4 days ago, when we started facing issues transacting data, with "Busy Indexing" errors.
If I'm not wrong this is due to our transactors being unable to ingest data at the same pace we are transacting it, or is there something else I'm missing here?
Thanks :D#2020-11-0317:37kschltz@U28A9C90Q#2020-11-0321:18kschltzAnother odd thing is that my Dynamo Write Actual is really low, despite my IndexMemDb metric being really high#2020-11-0321:19kschltzI have 130 Write provisioned, but only 2 is used#2020-11-0322:07tony.kayare you running your application on the compute group? Or are you carefully directing clients to query groups that service a narrow number of dbs? If you hit the compute group randomly for app stuff, then you’re going to really stress the object cache on those nodes.#2020-11-0322:08tony.kaywhich will lead to segment thrashing and all manner of badness#2020-11-0322:09kschltzIm pointing my client directly to compute group#2020-11-0322:10tony.kayyeah, I don’t work for cognitect, but my understanding of how it works leads me to the very strong belief that doing what you’re doing will not scale. Remember that each db needs it’s own RAM cache space for queries. The compute group has no db affinity, so with 20 dbs you’re ending up causing every compute node to need to cache stuff for all 20 dbs.#2020-11-0322:11kschltz@U0CKQ19AQ would you say it would be best if I transacted to a query group fed by a specific set of databases?#2020-11-0322:11tony.kayright, so a given user goes with a given db?#2020-11-0322:12tony.kay(a given user won’t need to query across all dbs?)#2020-11-0322:12kschltzFrom what Ive read, transactions to query groups end up in compute group#2020-11-0322:12tony.kayyes, but that is writes, not memory pressure#2020-11-0322:12kschltzthis application is write only#2020-11-0322:12tony.kaywrites always go to a primary compute node for the db in question. 
no way around that#2020-11-0322:13tony.kaythe problem is probably that you’re also causing high memory and CPU pressure on those nodes for queries#2020-11-0322:13tony.kayyou could also just be ingesting things faster than datomic can handle…that is also possible#2020-11-0322:13tony.kaybut 20dbs on compute sounds like a recipe for trouble if you’re using that for general application traffic#2020-11-0322:14kschltzI tried shutting my services down and give time to datomic to ingest, but to no avail. IndexMemDB is just a flat line#2020-11-0322:15kschltzI will give your suggestion a try, thanks in advance#2020-11-0322:15tony.kaythere’s also the possibility that the txes themselves need to read enough of the 20 diff dbs to be causing mem problems. I’d contact support with a high prio ticket and see what they say.#2020-11-0322:15tony.kaycould be something broke 🙂#2020-11-0322:17kschltzThe way things are built, there is a client connection for each one of the databases, depending on the body of a tx it is transacted to a specific db#2020-11-0322:18tony.kaythe tx determines the db?#2020-11-0322:18kschltzyes#2020-11-0322:19tony.kayooof. much harder to pin limited dbs to a query group then.#2020-11-0322:19tony.kaygood luck#2020-11-0322:19kschltzThanks#2020-11-0403:16NassinIf each node will be indexing/caching all 19 DBs, what's the point of increasing the node count to 20?#2020-11-0403:27NassinIf the answer is writes, will each DB have a different preferred node for transactions and does cloud tries to distribute this evenly? or will a single node, at any point in time can be the preferred one for multiple databases?#2020-11-0403:37NassinIf it's the latter, sounds like you are better served by creating multiple production stacks to better distribute writes among databases than by increasing the node count for a single production stack (sounds like a nightmare though) or.. 
instead of increasing to so many nodes, have fewer nodes but increase their size, e.g. to an i3.xlarge#2020-11-0413:40kschltzWe are writing. From what we could gather, theoretically datomic would spread traffic across nodes since it was writing to different databases. We are stable since we upgraded our stack from 704 to 715. Looks like we were having issues in GC#2020-11-0412:21pithylessUsing Datomic on-prem, I am trying to migrate a :db/ident to a new alias (while keeping the old one for existing code). The docs suggest this is possible: https://docs.datomic.com/on-prem/best-practices.html#use-aliases
Unfortunately, the documented approach asserts the new ident and removes the previous one:
[:db/add :old/id :db/ident :new/id]
This would make sense, since the schema is a cardinality one:
;; => #:db{:id 10, :ident :db/ident, :valueType :db.type/keyword, :cardinality :db.cardinality/one, :unique :db.unique/identity, :doc "Attribute used to uniquely name an entity."}
Was this changed in some version of Datomic and the docs are not up-to-date? Is there a better way to go about introducing backwards-compatible idents? I suppose I could just change the cardinality to many, but not sure if that would break other assumptions and/or performance?#2020-11-0412:40favilaIdent lookups are special because they ignore retractions. Go ahead and try it: (d/ident db :old/id)#2020-11-0412:42favilaCardinality many wouldn’t solve the problem: it would just make it ambiguous which ident was the preferred vs deprecated one#2020-11-0412:43favilaIt also wouldn’t solve the problem of moving an ident to a different attribute#2020-11-0414:29pithylessThanks @U09R86PA4; what threw me off was querying [?e :db/ident :old/id] returned an empty set; it would only find it via [:?e :db/ident :new/id]. But that makes sense if the idents are special via ignoring retractions.#2020-11-0414:30favilaquerying won’t act like this---only ident resolution#2020-11-0414:30pithylessQuerying for [?e :old/id ...] and [?e :new/id ...] does work. But I've still got to debug why it's not working with my datofu/conformity migrations.#2020-11-0414:31pithylessThanks for pointing me in the right direction!#2020-11-0519:10dogenpunkHas anyone run into this error using the datomic CLI?
Syntax error (FileNotFoundException) compiling at (clojure/core/async/impl/ioc_macros.clj:1:1).
Could not locate clojure/tools/analyzer__init.class, clojure/tools/analyzer.clj or clojure/tools/analyzer.cljc on classpath.
I’m coming back to a datomic cloud project after ~8mo. The datomic script loads tools.ops version 0.10.82. Seems to only occur if I run datomic commands from my project directory, so I assume this is an issue with the project deps.edn.#2020-11-0519:55Alex Miller (Clojure team)you might need to update to latest version of the clojure tools (clj) - there were some dependency issues in past versions that would prevent necessary transitive deps from being included in the classpath#2020-11-0519:56Alex Miller (Clojure team)the error above could definitely be a sign of that#2020-11-0520:21dogenpunkHmmm… clojure -h reports version is 1.10.1.727 which seems to be the latest. This is obviously not critical as I can just run from outside the project directory. I’m mostly worried that I screwed something up when upgrading datomic, ions, etc. Thanks for the help though!#2020-11-0521:11Alex Miller (Clojure team)yep, that should be the latest#2020-11-0521:12Alex Miller (Clojure team)if you want to ask at https://ask.datomic.com that would be a good place to file a question - would be helpful to include exactly what you ran (and your deps.edn if relevant)#2020-11-0812:17dogenpunkThanks, Alex. I’ll do that. #2020-11-0616:17joshkhjust nudging this post because we are keen to make use of all the great things dev-local has to offer 😇 https://forum.datomic.com/t/execution-error-when-importing-from-cloud-with-dev-local-0-9-225#2020-11-0616:20joshkhdev-local is our only best option right now to satisfy some customer requirements regarding backups, so any help would be much appreciated#2020-11-0618:26Jon WalchFor index-pull is it possible to specify multiple attributes and their values with :start? I want to index-pull all entities where every entity has an attribute's value as x and another attribute is a numerical value in sorted order.#2020-11-0618:32Jon WalchI basically want this returned:
[{:x :foo
  :y 1000}
 {:x :foo
  :y 9780}
 ...
]#2020-11-0619:14g3oHello, today I started finally playing around with Datomic, and when I open the db.log file I see some weird chars, is that normal#2020-11-0619:27Alex Miller (Clojure team)sure, it's binary data#2020-11-0619:41g3ooh I see. is there a way to make this more readable?#2020-11-0619:45Alex Miller (Clojure team)no? what are you trying to do?#2020-11-0619:45g3onothing special, just curious what is that file.#2020-11-0622:02jjttjjanyone else having trouble with the datomic-cloud maven repo? I'm trying to run https://github.com/Datomic/ion-starter but seemingly keep timing out when downloading the jars when trying to start a repl#2020-11-0701:06tony.kayI am. I’m trying to update topology to prod, and it keeps timing out there#2020-11-0718:43joshkhany luck? what is timing out?#2020-11-0719:43tony.kaywell, not yet. The code deploy step: “Script at specified location: sync-libs failed to complete in 120 seconds” and the logs are trying to do an s3 copy#2020-11-0719:43tony.kayI realized that I need to up my i3 quota by one, so I’m waiting for that to finish before trying again#2020-11-0701:07tony.kaybeen wasting life on this all afternoon 😞#2020-11-0718:42joshkhis there a way to return the current base schema version of a cloud db without upgrading it?#2020-11-0801:37ivangalbansHi everyone,
I’m trying to use attribute predicates in a project and i’m following https://docs.datomic.com/on-prem/schema.html#attribute-predicates
I have a running transactor with datomic-pro-0.9.6045 :
bin/transactor config/samples/dev-transactor-template.properties
and I have a simple file as in the doc:
(ns datomic.samples.attrpreds
  (:require [datomic.api :as d]))

(defn user-name?
  [s]
  (<= 3 (count s) 15))

(def user-schema
  [{:db/ident :user/name,
    :db/valueType :db.type/string,
    :db/cardinality :db.cardinality/one,
    :db.attr/preds 'datomic.samples.attrpreds/user-name?}])

(def uri "datomic:")
(d/create-database uri)
(defonce conn (d/connect uri))
@(d/transact conn user-schema)
This file is outside the Datomic directory. I have a running repl via cider-jack-in-clj.
I have evaluated the buffer and the schema is installed as expected, but when I run
@(d/transact conn [{:user/name "X"}])
I get the following error:
> Could not locate datomic/samples/attrpreds__init.class, datomic/samples/attrpreds.clj or datomic/samples/attrpreds.cljc on classpath.
I have read this in the doc:
> Attribute predicates must be on the classpath of a process that is performing a transaction.
But I don’t know how to do it. I have a proof of concept project with datomic-pro-0.9.6045 (dev), cider, deps.edn…
Should I set the DATOMIC_EXT_CLASSPATH variable to datomic/samples/attrpreds.jar (or .clj if possible) before running the transactor?
How do you configure your projects in this case?
Thanks in advance#2020-11-0811:37souenzzo@ivan.galban.smith for learning, use datomic:
For "production", you will generate an artifact or something like that and do that classpath thing#2020-11-0817:33ivangalbansis there a workaround?
I avoid using mem because I don’t wanna lose the datomic console. Persistent data is not important to me at this moment, although I would like to have it too#2020-11-0817:41ivangalbansi understand what you say and generate an artifact is overkill for my purpose.#2020-11-0816:00joshkhthis one is driving me a little crazy. given the following tree:
a -> b -> c -> d -> e
how might i write a rule to find the most ancestral entity of e that has :item/active? true? in this example, the result would be b
[; schema
 {:db/ident :item/id
  :db/valueType :db.type/long
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}
 {:db/ident :item/children
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/many}
 {:db/ident :item/active?
  :db/valueType :db.type/boolean
  :db/cardinality :db.cardinality/one}
 ; entities
 {:item/id "a"
  :item/children ["b"]}
 {:db/id "b"
  :item/id "b"
  :item/children ["c"]
  :item/active? true}
 {:db/id "c"
  :item/id "c"
  :item/children ["d"]}
 {:db/id "d"
  :item/id "d"
  :item/children ["e"]
  :item/active? true}
 {:db/id "e"
  :item/id "e"}]
here is my starting point, which at least finds all ancestors of e (whereas I want just the farthest ancestor of e that is active, which is b)
(let [rules '[[(anc ?par ?child)
               [?par :item/children ?child]]
              [(anc ?anc ?child)
               [?par :item/children ?child]
               (anc ?anc ?par)]]]
  (d/q '{:find [(pull ?anc [*])]
         :in [$ % ?item-id]
         :where [[?n :item/id ?item-id]
                 (anc ?anc ?n)]}
(db) rules "e"))#2020-11-0823:19benoitFrom a logical point of view, you're looking for an ancestor that is active and that itself doesn't have an active ancestor. So I would write a rule to check if an entity has an active ancestor and negate it.#2020-11-0915:31joshkhthat sounds like a good way to approach it. thanks.#2020-11-0916:14xcenoI have an application running as datomic ion. I'll need to upload a bunch of binary files between 1MB-100MB. There's a stack overflow answer (https://stackoverflow.com/a/10569048/932315) that brings up some points I'd like to clarify:
• Is it a good idea to store blobs in datomic if I disable the history for them?
• Does it make sense to store files as a datomic byte array?
• Or should I rather upload the files to S3 and save the URL in an attribute?#2020-11-0916:31NassinLast one, cloud doesn't have the byte array type#2020-11-0916:33NassinIn on-premise it's only used for very small binary data anyway#2020-11-0916:35joshkhseconded - put those files in S3 where they belong#2020-11-0916:36xcenoAlright, thanks guys!#2020-11-1013:24souenzzois there documentation about how to do permissions on datomic-ions?
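Editor's note: returning to the active-ancestor question above, benoit's suggestion (an ancestor that is active and that does not itself have an active ancestor) might be sketched as rules like the following. Rule names other than anc are invented here, the attributes follow the schema in the question, and this has not been run against the sample data:

```clojure
;; ancestor rules plus a negation-based "topmost active ancestor" rule;
;; `has-active-anc` and `top-active-anc` are invented names for illustration
(def active-ancestor-rules
  '[[(anc ?anc ?child)
     [?anc :item/children ?child]]
    [(anc ?anc ?child)
     [?mid :item/children ?child]
     (anc ?anc ?mid)]
    ;; ?e has at least one active ancestor
    [(has-active-anc ?e)
     (anc ?a ?e)
     [?a :item/active? true]]
    ;; ?top is an active ancestor of ?e with no active ancestor of its own
    [(top-active-anc ?e ?top)
     (anc ?top ?e)
     [?top :item/active? true]
     (not (has-active-anc ?top))]])

;; usage sketch - for the tree a -> b -> c -> d -> e this should find b:
;; (d/q '[:find (pull ?top [*])
;;        :in $ % ?item-id
;;        :where
;;        [?e :item/id ?item-id]
;;        (top-active-anc ?e ?top)]
;;      db active-ancestor-rules "e")
```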
Last time I tried there were no docs; my solution broke the CloudFormation stack and I got a day of downtime#2020-11-1013:29xcenoIt took me almost two weeks to get my initial setup up and running. I initially went with solo but upgraded to production later on. Anyhow, I have my own permission / auth system right now as ring-middleware. It provides the very basics, but if I find time I want/need to switch over to using AWS Cognito.
From what I've seen in the forums there are some people using Ions + Cognito in production, but there aren't any docs or examples in the wild. At least I haven't found any, if you do, please let me know#2020-11-1716:19joshkhonly just saw this thread, but in case you haven't found an answer yet @U2J4FRT2T, can you clarify by what you mean as permissions? user permissions to your api? permissions for your ion to access other AWS services?#2020-11-1718:10souenzzoHow to customize the IAM of the machines created by DatomicCloudCloudFormation template
It isn't just "find the group and add the permission"
If you do that (like i did) you will not be able to remove/upgrade the CloudFormation because it will fail#2020-11-0918:49camdezIf I transact against a connection, then get a db value from that connection via datomic.api/db , and then query that db, are the newly transacted values guaranteed to be included in the db queried?#2020-11-0918:52camdez(Note that I’m not talking about explicitly using the :db-after value here.)#2020-11-0918:59favilaThat db is guaranteed to be at or after. Because other transactions may have happened in the meantime, you’re not guaranteed that any particular value is in there#2020-11-0919:01favilaput it differently: the peer receives transaction updates in-order via a single queue. Your d/transact future finalizes when it sees the result of its request on that queue. So it’s not possible for a d/db call to not see that tx yet if it ran after the future finished#2020-11-0919:02favilayou can access this queue yourself via d/tx-report-queue#2020-11-0919:09camdezThanks, @U09R86PA4! That’s what I meant to ask, I just worded it poorly. I’ve been operating under this assumption for a while and had a bug today that had me questioning my sanity. 😛 Just got it figured out though. Much appreciated.#2020-11-1012:39geodrome“High Availability (HA) is a Datomic Pro feature for ensuring the availability of a Datomic transactor in the event of a single machine failure.” according to https://docs.datomic.com/on-prem/ha.html. 
“All Datomic On-Prem licenses are perpetual and include all features…” including High Availability for Failover according to https://www.datomic.com/get-datomic.html. Please clarify whether HA is included with Datomic Starter. Thanks.#2020-11-1013:04jaretYes, you can use HA with a Datomic starter license.#2020-11-1014:15favilaWith the client API, is there any equally-performant alternative to seek-datoms to find a next-v in an :aevt index? (d/seek-datoms db :aevt :known-attr some-e) . I’ve tried index-pull with :aevt, but it requires :a to be cardinality-many (?!); Query with > and <= seems to not-work quickly (which I didn’t expect: I expected either too slow or error)#2020-11-1014:17favilaExample use case: given a tx value (which may fall between actual existing tx ids), find the :db/txInstant of the same e or the nearest-next one. In this particular case you can use the log (although this seems less efficient), but I have other cases besides :db/txInstant where I do this in the peer api for performance.#2020-11-1015:13jarethttps://docs.datomic.com/cloud/query/query-index-pull.html#aevt
> :v must be `db.type/ref` and `:db.cardinality/many`#2020-11-1015:13jaretIs that what you meant with trying :aevt? ^#2020-11-1015:15favilaI mean I want to start matching (fuzzily) on e#2020-11-1015:15favilapeer code: (-> (d/seek-datoms db :aevt :db/txInstant tx) first :v)#2020-11-1015:15favilahow would I do that efficiently with the client api?#2020-11-1015:18favilaThe error I got using dc/index-pull was `Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
:db/txInstant is not card-many, as required for :aevt`#2020-11-1015:19favilaI see now the error message was just misleading, the problem is really that it’s not a ref attr#2020-11-1015:28jaretYeah, I think index-pull is the answer here, but that error message is something I want to look at and I am going to talk with the team to see if there is a more efficient way that doesn't have the requirements of index-pull.#2020-11-1015:33favilaI’m not sure how index-pull could be the answer, as I would need to pull the second element not the third#2020-11-1015:34favilaI would want to pull from the e in the :aevt, not the :v#2020-11-1015:34favila(actually I don’t want to pull at all-I just want the e and v)#2020-11-1015:14Carey HayHello! Our organisation has recently migrated to kubernetes using Amazon EKS. We are running the datomic transactor inside a pod in k8s and are hoping to also create a cronjob to create database backups to s3 using the standard backup commands. Has anyone done this successfully using IAM Roles for service accounts? https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
We are using these successfully for other applications to negate the need to mount aws credentials files into pods, but datomic does not seem to be able to interact with the auth tokens that are created in each pod. According to the supported sdk list, an application requires the following:
• Java (Version 2) — https://github.com/aws/aws-sdk-java-v2/releases/tag/2.10.11
• Java — https://github.com/aws/aws-sdk-java/releases/tag/1.11.704
The most recent reference to the aws sdk in the change log is here: https://docs.datomic.com/on-prem/changes.html#0.9.5561.50, citing "Peers and transactors now use version 1.11.82 of the AWS SDK". Depending on what sdk is used, it may or may not be supported!#2020-11-1208:08Ivar RefsdalHi. And thanks for a fine piece of software!
I'm having a problem with excision and on-prem:
My history database (eavto) looks like this:
[17592186045418 :m/info "secret data that should be removed" 13194139534313 true]
[17592186045418 :m/info "secret data that should be removed" 13194139534315 false]
[17592186045418 :m/info "OK data" 13194139534315 true]
Then I execute excision:
{:db/excise 17592186045418,
:db.excise/attrs [:m/info],
:db.excise/beforeT 13194139534315}
After waiting for syncing of excision, my history database looks like this:
[17592186045418 :m/info "secret data that should be removed" 13194139534315 false]
[17592186045418 :m/info "OK data" 13194139534315 true]
Thus the bad secret data is still present in the history, but only as a retraction, which does not make sense in my opinion.
Is it possible to fix this? To also get rid of the retraction information?
Here is a gist that reproduces this issue:
https://gist.github.com/ivarref/f92d9efd45d1c0cbd2d239bf4904a323
Thanks and kind regards.#2020-11-1208:12Ivar RefsdalCC @UQGBDTAR4 @U11RVUGP7#2020-11-1212:14favilaBeforeT is not inclusive#2020-11-1212:15favilaThe history items that remain have a tx == your beforeT argument to the excision#2020-11-1217:38Ørnulf Risnes@U09R86PA4 (I'm a colleague of @UGJE0MM0W)
Thank you for the response.
If you look at Ivar's example, you will see that the problem is that the retraction of the problematic datom that we want to excise and its benign counterpart that we want to keep - they have the same tx.
This typically happens when we add a new value to an attribute with cardinality one.
So - beforeT isn't expressive enough to distinguish between the retraction of the problematic value and the adding of the benign value.#2020-11-1217:41favilaAh I understand your problem now.#2020-11-1217:41favilaYes, it’s not expressive enough. There’s no way to get exactly what you want.#2020-11-1218:03Ørnulf Risnes@U09R86PA4 Thank you again.
Since the first entry of the datom in the post-excision history now is a (logically invalid) retraction, we were hoping for some kind of "garbage collection" mechanisms to rescue us here, and help us get rid of the problematic value completely.
Will send a question about possible workarounds to Datomic support.
(Cc @U1QJACBUM)#2020-11-1218:04favilaI suspect any gc or reindex mechanisms, even if they remove the item from the index, will not remove them from the tx-log#2020-11-1218:05favilaexcision is special in that it alters tx-log entries; even noHistory doesn’t do that#2020-11-1218:35Ivar RefsdalThanks @U09R86PA4 and @UQGBDTAR4
I've noticed the following:
retraction is about existing data, thus it does not make sense to keep [17592186045418 :m/info "secret data that should be removed" 13194139534315 false] in the history database.
Ref: https://docs.datomic.com/on-prem/transactions.html#retracting-data
If I do
@(d/transact conn [[:db/retract "item-that-does-not-yet-exist" :m/info "secret data"]])
this will be silently discarded, which is OK, though I would prefer an exception. It does not end up in the history database.
In this respect I think there is a mismatch between retract and excision, and I think the excision logic should be improved with the following: the new database history of the excised entity and attribute should never contain a retraction in the first transaction. This simple rule would solve the problem (I think!).
Thanks and kind regards again.#2020-11-1219:23favilaI don’t speak for cognitect, but because this alters transactions which happened after the beforeT, I can see this as a semantic grey area about the meaning of excision#2020-11-1219:23favilait’s probably also a performance concern because many more datoms and transactions need inspection#2020-11-1219:25favilayour rule is also too simple for cardinality-many attributes#2020-11-1308:09Ivar RefsdalI agree it's a grey area. I wouldn't be too concerned about performance as excision is a seldom thing, but yes I do not know the performance / implementation implications of this suggestion.
Are you sure the rule is too simple?
Why would the first transaction of an entity's attribute (cardinality-many) have any retractions? It's the equivalent of:
@(d/transact conn [[:db/retract "new-item" :m/many "data-1"]
[:db/retract "new-item" :m/many "data-2"]])
which does not make sense.
Or did I miss something?#2020-11-1316:30favilawhat I mean is that the retracts may be spread throughout the transaction history after the excision time. You need to know what values used-to-be asserted at moment T, and you need to look for the first retraction or assertion of any of those values forward in time. For cardinality-many, there won’t be just one transaction. You can terminate early if all values are accounted for, not on the first transaction#2020-11-1316:30favilain the worst-case, a value is never retracted later, so you scan all of time#2020-11-1214:32avocadeHey guys! Anyone else having an issue when using expound and datomic dev-local's (d/db conn) value in specs (either directly, or using guardrails/ghostwheel which wraps expound)?
We filed an issue on it here for reference: https://github.com/bhb/expound/issues/205#2020-11-1216:10Michael Stokleythese are logically equivalent. are they equivalent from a perf standpoint?
(d/q '[:find ?e
       :in $ ?id
       :where [?e :e/id ?id]]
     db id)
;; vs
(d/q '[:find ?e
       :in $ ?e]
     db [:e/id id])
where `[:e/id id]` is a lookup ref#2020-11-1216:23tatutI don’t think they are completely equivalent, the first will return a :db/id number and the latter will just return the lookup ref as is in the results#2020-11-1216:28Michael Stokleyah, you're right.#2020-11-1216:29Michael Stokleyi should have included a pull pattern in the example#2020-11-1216:29tatutso I would think the latter should be faster as it does nothing#2020-11-1216:29Michael Stokleymy question is more around whether it matters to pass in the unique identifier or the lookup ref#2020-11-1216:29tatutyou can give the latter a non-existing lookup ref and it just happily returns it#2020-11-1216:31Michael Stokley(d/q '[:find (pull ?e pull-pattern)
:in $ ?id pull-pattern
:where [?e :e/id ?id]]
db id pull-pattern)
;; vs
(d/q '[:find (pull ?e pull-pattern)
:in $ ?e]
db [:e/id id] pull-pattern)#2020-11-1216:31Michael Stokleydo you think there would be a performance difference in the above?#2020-11-1216:32tatutfeels to me that there shouldn’t be, but I don’t really know#2020-11-1216:32tatutand if there is, it is likely negligible#2020-11-1216:33tatutbut in both cases, if you have a lookup ref, wouldn’t you just use (d/pull db pattern id)instead of q?#2020-11-1216:41Michael Stokleyyou could. it's a bad example, sorry. in truth, the real code has additional where clauses, so it's a real query.#2020-11-1216:21ziltiIs there an usable tutorial somewhere on how to set up Metabase with Presto?#2020-11-1216:35ziltiI've set everything up, but all I get is
Nov 12 16:35:28 the-network java[12958]: 2020-11-12 16:35:28,337 ERROR driver.util :: Database connection error
Nov 12 16:35:28 the-network java[12958]: java.io.EOFException: SSL peer shut down incorrectly
#2020-11-1219:05respatializeddata modeling question: is there any semantics for disjoint attributes in Datomic - something like "an entity can have attribute x or attribute y, but not both"? Or is that anathema to the open composition of attributes that Datomic's data model encourages and those constraints should be left up to the application?#2020-11-1220:30benoitYou cannot express this constraint with the Datomic schema attributes but you can always enforce it with a custom database function.
Whether it is a good idea from a logical perspective, I'm not sure. This looks like a sum type to me.
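Editor's note: a minimal sketch of the "custom database function" idea above. The attribute names :my/x and :my/y are placeholders; in on-prem such a body could be installed as a classic :db/fn and invoked in tx-data so the transactor enforces the constraint. Note this sketch only checks the incoming entity map, not attributes already asserted on the entity in db:

```clojure
;; guard that rejects an entity map asserting two mutually-exclusive
;; attributes; the db argument mirrors the classic :db/fn calling convention
(defn assert-disjoint
  "Returns tx-data asserting entity map m, or throws if m contains
  both :my/x and :my/y (placeholder attribute names)."
  [db m]
  (if (and (contains? m :my/x) (contains? m :my/y))
    (throw (ex-info ":my/x and :my/y are mutually exclusive" {:entity m}))
    [m]))

;; once installed under a (hypothetical) ident, it would be invoked in
;; tx-data as [[:my/assert-disjoint {:my/x 1}]]
```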
You can also think about other ways to implement it like creating a ref attribute that points to an entity that can have the x or y attribute.#2020-11-1316:31marshall@jdkida you can use the pull API directly
https://docs.datomic.com/on-prem/pull.html (onprem)#2020-11-1316:31marshallhttps://docs.datomic.com/cloud/tutorial/read.html#pull (cloud)#2020-11-1316:33jkidaahh, i see. eid or (unique-id) work?#2020-11-1316:33marshallor lookup ref#2020-11-1316:33marshallhttps://docs.datomic.com/on-prem/identity.html#entity-identifiers#2020-11-1316:33marshallit takes an entity identfier#2020-11-1317:18gabor.veresHi all, newbie question: does Datomic support "ordered" :db/cardinality many attributes? I'd like to store a vector of values, and somehow retrieve the same, ordered vector. The actual use case would be an entity that refers to other entities, but those references do have a defined order. I can't seem to find a way to do this on the data model/schema level. Is this an application/client concern rather, meaning I store data required to reconstruct the order and reorder after retrieval?#2020-11-1518:56val_waeselynckCheck out Datofu, it has helpers for that IIRC#2020-11-1317:26Braden Shepherdsonwell, the underlying indexes are always sorted, but that order doesn't necessarily survive in a query.#2020-11-1317:27Braden Shepherdsongenerally you have to do your own sorting in memory. if there's some arbitrary order (say, tracks on an album) then you need to record those as attributes.#2020-11-1317:30Braden Shepherdsonputting it slightly differently, you might transact {:foo/id (uuid "...") :foo/things [19 12 22]} but that's just a shorthand. it swiftly gets unpacked to a set of entity-attribute-value triples, and the order of your vector is lost. 
it's just a set to Datomic.#2020-11-1317:39gabor.veresThanks @braden.shepherdson, that's what I suspected - this is an application level concern then.#2020-11-1318:14favilaIf you need control over partially-fetching items in a certain order, use d/index-pull#2020-11-1610:03hanDerPederAny harm in transacting a schema multiple times?#2020-11-1612:22souenzzoI transact on every application start (even on elastic ones)#2020-11-1613:43vnczI also do the same and I have not noticed any problem#2020-11-1715:47tvaughanSame#2020-11-1616:14Michael Stokleyis calling d/db to create a db from a conn expensive?#2020-11-1616:29favilaNo#2020-11-1616:30favilaYou should think more about consistent values for a unit of work than about the expense of creating a db object: https://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2020-11-1616:31favilaalso by passing down a db you guarantee that the entire subtree of function calls cannot transact#2020-11-1616:32favila(so you don’t have to worry about accidental writers)#2020-11-1618:07Jakub Holý (HolyJak)Hello! I would like to start playing with Datomic. I have this project, clj_tumblr_summarizer , that will run monthly as AWS Lambda, fetch fresh posts from Tumblr, and store them somewhere for occasional use. Now I would like the "somewhere" to be Datomic. It is far from the optimal choice but I want to learn it 🙂
So my idea is to use dev-local and store its data in S3 (fetch it at the lambda start, re-upload when it is done).
My question is: Is this too crazy? Thank you!#2020-11-1619:20ghadiYes too crazy because of concurrency #2020-11-1620:08Jakub Holý (HolyJak)Thank you. Could you be so kind and expand on that a little? Do you mean it breaks when concurrent access is attempted? I don't think I have any concurrency there..#2020-11-1620:41ghadiYou’d have to guarantee that the lambda is not being concurrently called#2020-11-1620:42ghadiAt which point it would be better to just use datomic proper or ddb#2020-11-1620:49Jakub Holý (HolyJak)Well, the lambda is run once a month by a schedule so I wouldn't worry about that. Yeah, dynamodb is much bette choice but then I don't get to learn Datomic 😢#2020-11-1618:10gdanovhi, is there any performance or other difference how 1-n relations are implemented? refs 1 --> n or the other way round?#2020-11-1618:59Braden ShepherdsonBecause of the VAET reverse lookup index, there's no major performance impact here either way I think, provided you write your :where clauses properly.
think about how you'd write the query for each case (have child find parent, have parent list children, etc.), and you'll see they work out about the same.#2020-11-1619:01gdanovthanks...what would be 'improper' :what clause in this case? I'm asking exactly because query-wise there's no difference#2020-11-1619:01Braden Shepherdsonoh I just meant the usual principles of writing your :where clauses so that they (a) start as specific as possible, and (b) always have overlap between one line and the next, so you don't get a big cross product.#2020-11-1619:02Braden Shepherdsonyou're right, it doesn't really matter which way around you model the relationship, the where clause is just swapped around.#2020-11-1619:03gdanovgot it. I typically navigate to specific 'child' node from the 'master' so was thinking that maybe it's more efficient to have the ref on the child#2020-11-1619:03Braden Shepherdsonit's worth noting: if the children are :db/isComponent and should be deleted if their parent is deleted, then you want a list of children refs on the parent.#2020-11-1619:07gdanovhow about
[:find ?parent ?child
 :in $ ?param ?child-param
 :where
 [?parent :some/attrib ?param]
 [?child :has/a ?parent] ;; or the other way round
 [?child :other/attrib ?child-param]]
#2020-11-1619:07gdanovyes, I really don't see what difference it could make#2020-11-1619:08Braden Shepherdsonthat's perfectly fine. what you want to avoid is this order:
[?parent :some/attrib ?param]
[?child :other/attrib ?child-param]
[?child :has/a ?parent]
because that finds all plausible parents, and all plausible children, and then finally just the intersection.#2020-11-1619:08Braden Shepherdsonbut that's a general query design thing and doesn't really have anything to do with 1-n relationships.#2020-11-1619:10gdanovyes you are right. my thinking is still SQL influenced sometimes and I get weird feelings and need to double-check#2020-11-1619:12Braden ShepherdsonI'll remark, finally, that the "parent with list of children" approach actually makes a n-n relationship, in principle. it's just a coincidence if every child appears in the list of exactly one parent. having a :db.cardinality/one parent attribute on each child makes it certain that it's 1-n.#2020-11-1619:13gdanovgood one, this is important if I need to enforce restriction. thanks!#2020-11-1703:21onetomHow can we restrict client apps / IAM users to only have access to certain databases?
The https://docs.datomic.com/cloud/operation/access-control.html article defines the DbName metavariable at the beginning, but then it's not mentioned afterwards.
It does have a section called Authorize Client Applications, linking to https://docs.datomic.com/cloud/operation/client-applications.html , but that page doesn't mention DbName either.
Is it not possible to restrict access to certain dbs or it's just not documented?#2020-11-1716:23joshkhi'd like to know this as well. i had started defining a policy to grant access to just certain access keys in the datomic s3 bucket, but in the end gave up (admittedly after not much trial and error)#2020-11-1704:17mruzekwHas anyone been able to install dev-local on a Windows machine (not WSL or VM)?#2020-11-1704:30mruzekwLooks like Powershell is particular about . in args. When you run the mvn commands from ./install wrap the whole -Dfile arg in quotes (`"-Dfile=…"`)#2020-11-1715:01jaretI was able to get Dev-local running on windows 10, using powershell. I created the .datomic\dev-local.edn file and populated with:#2020-11-1715:01jaret{:storage-dir "C:\\Users\\<COMPUTER NAME>\\dev-local-proj\\storage"}#2020-11-1715:07jaretalternatively you can specify the storage dir which is what is contained in the .datomic folder.#2020-11-1715:07jaret(def client (d/client {:server-type :dev-local
:storage-dir "C:\\Users\\<COMPUTER NAME>\\dev-local-proj\\storage"
:system "dev"}))#2020-11-1716:56mruzekwThanks, jaret!#2020-11-1814:10souenzzoHello
Can I do something better than this or-join to create the ?in-cart? value?
This does not look right.
'[:find ?name ?in-cart?
  :keys :item/name :item/in-cart?
  :where
  [?item :item/name ?name]
  (or-join [?item ?in-cart?]
    (and [_ :cart/item ?item]
         [(ground true) ?in-cart?])
    (and (not [_ :cart/item ?item])
         [(ground false) ?in-cart?]))]
Full runnable example here
https://gist.github.com/souenzzo/ebd049a99443883ebab180ff019400ba#2020-11-1814:13favilaThis is what I would do. Why does it not look right?#2020-11-1814:15favilaI was going to suggest get-else as another possibility, but the reference is in the wrong direction#2020-11-1814:15souenzzoor > and // and not feels too nested. Not sure if there is some missing? or any other function to help#2020-11-1814:16favilayou could use a named rule, that would eliminate all the nesting
#2020-11-1814:17favila'[[(item-in-cart? [?item] ?in-cart?)
[_ :cart/item ?item]
[(ground true) ?in-cart?]]
[(item-in-cart? [?item] ?in-cart?)
(not [_ :cart/item ?item])
[(ground false) ?in-cart?]]]
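Editor's note: to complete the picture, here is the named rule plugged back into the original query, with the rule set passed as the % input. This is a sketch assuming the schema from the gist above and has not been run:

```clojure
;; the or-join from the question replaced by the two-definition rule;
;; the rule set is supplied at query time through the % data source
(def items-with-cart-flag-query
  '[:find ?name ?in-cart?
    :keys :item/name :item/in-cart?
    :in $ %
    :where
    [?item :item/name ?name]
    (item-in-cart? ?item ?in-cart?)])

;; usage sketch: (d/q items-with-cart-flag-query db rules)
```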
#2020-11-1814:18favila… :where [?item :item/name ?name](item-in-cart? ?item ?in-cart?)#2020-11-1814:19souenzzoRules can have the same name, then they will behave as a "or"? 😮
There is more examples/docs about this?#2020-11-1814:41favilahttps://docs.datomic.com/on-prem/query.html#rules#2020-11-1814:41favila> Rules with multiple definitions will evaluate them as different logical paths to the same conclusion (i.e. logical OR). Here’s a rule, again from the Seattle example, which identifies communities that are “social-media”.#2020-11-1814:42favilaThis was the only way to express “or” before or, and etc were added#2020-11-1814:42favilathese are actually just syntax sugar for anonymous rules#2020-11-1814:42favila(with gensym-ed names)#2020-11-1815:37manutter51Am I doing something wrong? I want to retract all values of the group attribute:
(d/q '[:find ?group
:where [17592186048817 :group ?group]]
db)
=> #{[#uuid"5ede7e84-c6ac-4116-82d4-0f6dfae77b9d"]}
@(d/transact conn [[:db/retract 17592186048817 :group]])
Execution error (IndexOutOfBoundsException) at datomic.db.ProcessInpoint/inject (db.clj:2472).
#2020-11-1815:48favilaMaybe double-check your version. This feature is relatively recent#2020-11-1815:49favilahttps://docs.datomic.com/on-prem/changes.html#0.9.6045#2020-11-1815:49favilahttps://docs.datomic.com/cloud/releases.html#616-8879#2020-11-1815:49favilayou need those versions or greater#2020-11-1815:50manutter51Ah, ok, we are using an older version, thanks much#2020-11-1815:51manutter51Yup, that was it, thanks again.#2020-11-1913:51joshkhnil is meant to be allowed in heterogenous tuples, right?
for example, let's say i have a tuple like this:
[:db.type/ref :db.type/ref]
and nil in the second value
{:person/parents [123456 nil]}
i get an exception when i untuple and bind on the nil value
(d/q '{:find [?person ?parent-a-name]
:in [$]
:where [
[?person :person/parents ?parents-tuple]
[(untuple ?parents-tuple) [?parent-a ?parent-b]]
[?parent-a :person/name ?parent-a-name]
[?parent-b :person/name] ;<-- throws exception
]}
db)
Execution error (NullPointerException) at datomic.core.db/asserting-datum (db.clj:-1).
null
clojure.lang.ExceptionInfo: processing clause: [?parent-b :parent/name], message: #:cognitect.anomalies{:category :cognitect.anomalies/incorrect, :message "processing clause: [?parent-b :parent/name], message: "}
#2020-11-1919:40kschltz@U0GC1C09L I'm afraid you can't have nil there#2020-11-1919:41kschltzIf I'm not mistaken the value just would not be there#2020-11-1919:41kschltz{:parents [123456]}#2020-11-1919:56joshkhthat makes sense to me. do you know if that is the official stance? i am allowed to both transact and pull nil values. as for binding i would have expected the query to not unify instead of throw an exception#2020-11-1920:13kschltzI believe you get that exception because there's actually a value (nil), which makes the whole thing unify but it won't comply with your clause :person/name#2020-11-1920:14kschltzThat would be my guess#2020-11-1920:54joshkhfair enough 🙂#2020-11-1920:56joshkhjust a funny observation: when retracting an entity referenced in a tuple, the tuple retains the entity db/id even after the target has been retracted. that feels equally strange to me but for other reasons.#2020-11-1920:58kschltzNot sure why, but I think if you look at it like independent datoms, kinda makes sense. It's not like a component, where you retract the mother entity and the child follows along. But you have a fair point#2020-11-1921:09joshkhalso from Jaret on the forums (at least regarding homogeneous tuples)
> Additionally you may be interested in knowing that, `nil` is a legal value for any slot in a tuple. This facilitates using tuples in range searches, where `nil` sorts lowest.
https://forum.datomic.com/t/tuple-with-single-item-fails-transaction/1690
#2020-11-1921:10joshkhmaybe i'll make an ol' posteroo on the forums#2020-11-1921:11kschltzI hope Jaret gets a raise soon enough, he's always the one to respond to my tickets xD#2020-11-1921:13joshkhJaret and Marshall both do a great job of putting up with my bs 😎#2020-11-1921:13kschltzcouldn't say it better#2020-11-1921:36joshkh(in case someone from Cognitect reads the thread, i've posted the question here: https://forum.datomic.com/t/nil-value-in-heterogeneous-tuple-throws-a-nullpointerexception/1693)
#2020-11-1922:32favilaI think this is just a mismatch between datalog unification and how tuples are handled. Since this is a composite attribute, consider reading the source attribute value instead of the composite#2020-11-1922:34favila(I’ve often wished for composite attributes to have a “not nillable” option, so that they are not asserted unless all values are present)#2020-11-2009:43joshkhi don't think this is a composite tuple though. this is a heterogenous tuple.#2020-11-2013:04favilaHeterogeneous tuples cannot contain refs#2020-11-2115:17joshkhhmm, i don't doubt what you're saying.. maybe i'm just confused here. i thought this is a heterogeneous tuple (from the schema in the forum post above):
{:db/ident :person/parents
:db/valueType :db.type/tuple
:db/tupleTypes [:db.type/ref :db.type/ref]
:db/cardinality :db.cardinality/one}
which allows refs, and works with transactions and pulls. are you saying that hetero tuples shouldn't contain refs, rather than cannot contain refs?#2020-11-2313:34favilathey shouldn’t, and I thought they could not#2020-11-2313:34favilaif you read the docs, they make no mention of this#2020-11-2313:34favilaalso conceptually, it’s bad: ref values are supposed to be managed by datomic--this is no different than putting a long into a tuple#2020-11-2313:35favilawith composite tuples, it knows what assertion it’s denormalized from; here, there is no support. and you can’t use lookup refs or keywords or tempids to reference these#2020-11-2313:36faviladocs: https://docs.datomic.com/on-prem/schema.html#tuples#2020-11-2313:37favilaI’m wrong though, :db.type/ref is listed as a scalar type#2020-11-2313:38favilaI am pretty sure it wasn’t the last time I read this--maybe a change? Anyway, it still seems like a bad idea#2020-11-2313:38favilabut if you really need to do it, queries need to be defensive against nils#2020-11-2313:41favila[(!= ?x nil)] immediately might work, [(some? ?x)] should definitely work.#2020-11-2616:46joshkhhey favila, both of your solutions worked. thanks. in my case i'm storing some rankings in tuples...
{:race/finishers [[0 sally-ref coach-1-ref]
[1 bob-ref nil]
[2 jane-ref coach-2-ref]]}
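Querying the ranking tuples above with favila's nil guard might look like this (a sketch; `:person/name` and a cardinality-many `:race/finishers` are assumptions drawn from the example):

```clojure
;; Sketch: untuple each finisher and guard the possibly-nil coach
;; slot with `some?` before joining through it as an entity id.
(d/q '[:find ?rank ?runner-name ?coach-name
       :where
       [?race :race/finishers ?finisher]
       [(untuple ?finisher) [?rank ?runner ?coach]]
       [?runner :person/name ?runner-name]
       [(some? ?coach)]            ; skip finishers without a coach
       [?coach :person/name ?coach-name]]
     db)
```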
of course the alternative (and more verbose) solution is to have separate ranking entities that store all of the context (the race, the people involved, and a rank), but as an experiment i thought tuples might be an interesting solution. in my example i don't see why storing references in tuples seems like a bad idea.#2020-11-2009:10tatutcloud analytics isn’t connecting for me, idk how to troubleshoot this, ssh says channel 2: open failed: connect failed: Connection refusedso I’m guessing the presto stuff isn’t running on the gateway properly on 8989 port#2020-11-2009:12tatutseems presto-server can’t start:
[2020-11-20T11:12:05+02:00] (analytics-launcher-i-0d55456c7d45d2f92) Starting presto-server
[2020-11-20T11:12:05+02:00] (analytics-launcher-i-0d55456c7d45d2f92) java_props: -Dnode.id=ab602e6a-f4b7-47fc-840d-79cf355f6374 -Ddatomic.client.config='{:server-type :cloud :region "eu-central-1" :system "teet-datomic" :endpoint ""}'
[2020-11-20T11:12:05+02:00] (analytics-launcher-i-0d55456c7d45d2f92) presto_props --etc-dir=/opt/presto-etc --node-config=/opt/presto-config/node.properties --jvm-config=/opt/presto-config/jvm.config --config=/opt/presto-config/config.properties
[2020-11-20T11:12:06+02:00] (analytics-launcher-i-0d55456c7d45d2f92) Started as 18945
[2020-11-20T11:12:12+02:00] (analytics-launcher-i-0d55456c7d45d2f92) Exception in thread "main" java.lang.IllegalArgumentException: Cannot parse version 11.0.9.1
at io.prestosql.server.JavaVersion.parse(JavaVersion.java:76)
at io.prestosql.server.PrestoSystemRequirements.verifyJavaVersion(PrestoSystemRequirements.java:98)
at io.prestosql.server.PrestoSystemRequirements.verifyJvmRequirements(PrestoSystemRequirements.java:44)
at io.prestosql.server.PrestoServer.run(PrestoServer.java:90)
at io.prestosql.$gen.Presto_329____20201120_091206_1.run(Unknown Source)
at io.prestosql.server.PrestoServer.main(PrestoServer.java:72)#2020-11-2009:36tatuthttps://github.com/prestosql/presto/commit/e7eeeedcc9751c022ffb9df648ee5442cd421c32 the offending code seems to have been fixed in presto#2020-11-2021:13jaret@U11SJ6Q0K Hi thank you for reporting this. We are working to ship a new version of Presto and Java to address this. I think I could potentially provide a work around if required, but we're working this weekend to get the version shipped. Please message me if you would like potential instructions for getting around this in the interim#2020-11-2101:46jaretI updated your thread on Ask with the work around and will notify you there as soon as we get a new presto shipped out.#2020-11-2101:46jarethttp://ask.datomic.com/index.php/522/cloud-analytics-presto-server-cant-start#2020-11-2306:52tatutdowngrading worked#2020-11-2304:14onetomOur Datomic Cloud subscription is not showing up on the AWS Marketplace / Manage subscriptions
https://console.aws.amazon.com/marketplace/home?region=ap-southeast-1#/subscriptions
is that expected?
I see other subscriptions though from Container / Machine Image and CloudFormation categories...#2020-11-2306:53tatutthe analytics doesn’t seem to work if db-name contains a dash - character, I’m getting Query 20201123_062629_00002_8pm43 failed: Expected string for :db-name when trying a query, but it works if the db name contains only letters#2020-11-2306:55tatutI had a test db that was named project-2020-11-13 with a date and it didn’t work but the db named just project worked fine#2020-11-2312:33joshkhhuh. for what it's worth, i'm running analytics on a db with a dash in its name without an issue#2020-11-2313:40tatutgood to know, maybe it has some other issue, or it was a presto cli problem#2020-11-2309:36ivanaI have a relation where an order has a customer attribute and can access the customer via: [?o :order/customer ?c] The same way I can access the order if I have a query of customers. Also I can use get-else when an order has no customer. But how can I filter customers without orders? (not [?o :order/customer ?c]) gives an error
:db.error/insufficient-binding [?o] not bound in not clause: (not-join [?o ?c] [?o :order/customer ?c])
#2020-11-2309:58ivanaSeems that this way works:
(not-join [?c]
[?o :order/customer ?c])
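Expanded into a full query, ivana's not-join might look like this (a sketch; `:customer/name` is an assumed attribute):

```clojure
;; Sketch: find customers that no order references.
(d/q '[:find ?c ?name
       :where
       [?c :customer/name ?name]
       (not-join [?c]
         [?o :order/customer ?c])]
     db)
```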
#2020-11-2310:00ivanaAnd also seems that not is a sugar on not-join, allowing not to set bindings explicitly and manually#2020-11-2310:10ivanaHmm, seems that (not [_ :order/customer ?c]) also works#2020-11-2312:41kschltz@U0A6H3MFT it seems to me that issue lies entirely in the binding#2020-11-2313:59henrik[Cloud/Ions] Do all deps have to be under :deps, or can aliases be specified when pushing?#2020-11-2320:53souenzzono, AFAIK, you can't specify an alias
As I dig (not documented or explained by anyone):
the datomic-cloud instance will download your code, open your deps.edn, take the :deps map and decide which deps it will use (it does not seem to use tools-deps, or at least not as a simple command-line invocation)#2020-11-2413:56henrikAh, too bad. Had a good composable thing going, but I’m going to have to dump it all in :deps then. 🤷#2020-11-2316:14jarethttps://forum.datomic.com/t/cognitect-dev-tools-version-0-9-55-now-available/1697#2020-11-2407:13promesantehi, how could I update the value of an attribute by applying a function on it, like in a Clojure atom, instead of just replacing it with a new value, thanks#2020-11-2408:21danierouxhttps://docs.datomic.com/cloud/transactions/transaction-functions.html is maybe what you are looking for#2020-11-2423:10promesanteunderstood, thanks#2020-11-2409:47Lennart BuitYou could also do [:db/cas …], in which you specify your expected old value, as well as the new one. E.g. “If the account balance was 6 euros, then the account balance should now become 9 euros”. If your expectancy is not met, you’ll get an exception.#2020-11-2409:47promesante@danie thanks for the quick reply ! it seems so, but perhaps a bit too compact for someone new to Datomic like me, I'd need a clearer, complete example, thanks#2020-11-2409:53promesante@lennart.buit thanks for your quick reply ! what I am trying to implement is just applying a function to the current value and getting that replaced by the function application's output#2020-11-2409:58Lennart BuitYeah, you can do that, just pull the current value, apply your function, and compare-and-swap old to new.#2020-11-2409:58Lennart Buithttps://docs.datomic.com/on-prem/best-practices.html#optimistic-concurrency#2020-11-2423:10promesanteunderstood, thanks#2020-11-2418:32jaretDatomic 1.0.622 On-Prem now available: https://forum.datomic.com/t/datomic-1-0-6222-now-available/1700#2020-11-2419:25tvaughanWhat's the recommended solution, if there is one, for free form text search across multiple attributes?
Bonus points for different matching algorithms and weighted results. Has anyone attempted to integrate Datomic with Elasticsearch? Thanks#2020-11-2420:34pyryWell, I'd say Elasticsearch is indeed a good choice, giving you a lot of flexibility for text search across multiple attributes straight out of the box.#2020-11-2420:36pyryThere shouldn't be any issues with using Elasticseach with Datomic if you use the standard practice of storing your data in a real database (eg. Datomic) and only using Elasticsearch as a secondary view to the data.#2020-11-2501:09bhurlow+1 for pyry’s suggestion, keeping an index outside of the main Datomic database#2020-11-2511:10tvaughanThanks. I worked with Postgres and Elasticsearch and this is the approach we took then too.#2020-11-2613:29kschltzWe use datomic cloud + elastic search in a similar scenario described by pyry. It suits us just fine#2020-11-2617:52thumbnailWe use elastic search for the same purpose too. Even embedding the ES query inside datalog, so the queries are from the peer.
This way the datomic client consumers do not need ES integration
#2020-11-2712:25tvaughan@UHJH8MG6S I was hoping to hear something like this. Can you share any more details? An example maybe? #2020-11-2712:34thumbnailBasically we created 2 (internal) libraries. One for monitoring the tx-log and updating those values into ES.
The other exposes some search functions (uses spandex to perform a query). We put that library on the classpath of our peer server.
Which allows the client to use the functions in our internal library to be used in datalog:
{:query '{:find [?e]
:where [(ourthing/search ?client ?search-term) [?e ...]]
:in [$ ?client ?search-term]}
:args [(d/db ...), {:hosts [""]}, "Hello world!"]}
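The tx-log monitoring side that feeds ES might be sketched like this with the peer API (all names here — the spandex client, the watched attributes, the index — are illustrative, not thumbnail's actual code):

```clojure
;; Sketch: follow the tx-log and mirror assertions of selected
;; attributes into Elasticsearch via spandex.
(require '[datomic.api :as d]
         '[qbits.spandex :as es])

(def watched-attrs #{:item/name :item/description})

(defn index-tx-range!
  "Push datoms for watched attributes from the tx-log into ES,
  starting at basis t `start-t`."
  [conn es-client start-t]
  (doseq [{:keys [data]} (d/tx-range (d/log conn) start-t nil)
          [e a v _tx added?] data
          :let [attr (:db/ident (d/entity (d/db conn) a))]
          :when (and added? (watched-attrs attr))]
    (es/request es-client
                {:url    [:items :_doc (str e)]
                 :method :put
                 :body   {(name attr) v}})))
```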
In order to get this to work we built a second (internal) library which monitors the tx log, and updates ES indexes for any attributes we're interested in searching.#2020-11-2712:35thumbnailMost of this is very much WIP, but as far as I can tell right now it's a very viable solution.#2020-11-2712:46tvaughanNice! Thanks for sharing this! #2021-11-3014:31bhurlowcooL!#2020-11-2422:55bbrinckI’m trying to use dev-local to write a test that involves a datomic database.
I can open the connection and get a database, but when my code is done running, the program waits for about 30-40s before it completes.
Presumably this is because I’ve failed to shut down something related to datomic, but I cannot figure out how to close the connection or otherwise shut down the dev-local database.
(Note: if I never call datomic.client.api/connect, the program shuts down immediately)#2020-11-2422:55bbrinckHere is my code:
#!/bin/sh
#_(
#_DEPS is same format as deps.edn. Multiline is okay.
DEPS='
{:deps {com.datomic/dev-local {:mvn/version "0.9.225"}
}}
'
#_You can put other options here
OPTS='
'
exec clojure $OPTS -Sdeps "$DEPS" "$0" "#2020-11-2422:59favilaIf a clojure process is idle for 60 seconds before shutting down, it’s almost always the agent thread pool#2020-11-2422:59favilatry (shutdown-agents) at the end#2020-11-2422:59bbrinckWorks perfectly, thank you!!!!#2020-11-2516:48favilaI noticed attribute predicates check explicitly retracted values (from user-supplied tx data or tx-fn expansions) but not implicitly retracted values (from cardinality-one attributes with a new assertion). 1) Why check retractions at all? 2) Why this difference? I’m observing this on on-prem 1.0.6165 if it makes a difference.#2020-11-2516:49favilaI would prefer that attribute predicates are never run on retracted values as it makes it much easier to migrate to cleaned-up values. In our case we’re installing string length limits, and would be ok with old values being too long as long as new values are not.
#2020-11-2516:50favilaIt also means we need some kind of atomic tx fn to install an attribute predicate safely.#2020-11-2517:01Alex Miller (Clojure team)might be better to log this at https://ask.datomic.com ...#2020-11-2517:10favilaIMO this is a bug report; I’m just being polite 🙂#2020-11-2517:15Alex Miller (Clojure team):)#2020-11-2517:16Alex Miller (Clojure team)standard support channels are best, then ask.datomic as it creates a searchable archived record for the future, and then here is good for conversation but checked less frequently than the above#2020-11-2614:40thumbnailThe datomic REST-docs refer to the client-api for new projects. However how should non-jvm projects proceed?
I'm ultimately trying to access datomic through ruby.#2020-11-2614:40thumbnailhttps://docs.datomic.com/on-prem/rest.html#2020-11-2616:35joshkhmy answer isn't going to be particularly useful to you, but i've been picking apart this working nodejs + datomic cloud library in order to hopefully make a working cljs native version (and also explore a python client). https://github.com/csm/datomic-client-js
#2020-11-2616:38joshkhso far it has been... tedious#2020-11-2617:23thumbnailThat's amazing, i could translate your project to ruby as I only need a tiny subset.
Reverse engineering the api seems like such an interesting approach#2020-11-2621:11joshkhdefinitely! that project isn't mine though; i'm just reverse engineering it to port it to cljs and maybe python (like you might for ruby).#2020-11-2621:13joshkhit's a very useful blueprint and reveals some of the communication layers of Datomic Cloud. a big thumbs up to the author.#2020-11-2621:43thumbnailIt feels like such a step backwards compared to the REST doc though. There was so much potential there (evident from the many languages that have support for it)#2020-11-2623:22joshkhi have to agree with you there. Datomic Cloud was a huge step to making Datomic accessible without the sysadmin overhead, and Ions did the same for deploying Clojure code without the devops. that being said, AWS promotes Lambda as a cloud-wide connective tissue (as does Ions i guess), and i have a laundry list of use cases for language support outside the JVM#2020-11-2623:27joshkhi asked the same question you did a few months ago and someone recommended rolling out my own Ion to proxy queries over an endpoint. it sounds like a fun project but there are probably things to consider, like how to secure it#2020-11-2623:31joshkh... meanwhile over in Neo4j land 😉 https://neo4j.com/docs/http-api/current/#2020-11-2616:29joshkhusing dev-local, is it possible to establish a client using a local binary from import-cloud and then divert all transactions to :mem? in other words, i want to unit test some functions using an established db but not persist d/transactions to it#2020-11-2721:14steveb8n@U0GC1C09L this lib does exactly what you want but not yet for cloud https://github.com/vvvvalvalval/datomock/issues/6#2020-11-2721:22joshkhthanks for sharing. i'm sticking to dev-local as being the "official" solution at the moment, and starting from a cloud db (whether remote or imported locally) is a must.
no on-prem for me 🙂#2020-11-2709:24daniel.spanielI am trying to set up a composite tuple index for lets say an account entity and I am doing this ( which is like something straight out of the docs )
{:db/ident :account/company+number
:db/valueType :db.type/tuple
:db/tupleAttrs [:account/company :account/number]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value
}
where the account/company is a ref and the account/number is a long.
the problem is that the account/number value (that datomic is filling in) is always -> nil.
it seems like this tuple will be filled in by datomic correctly when the attrs are anything BUT ref and something else. if i do keyword and long its fine, but ref and anything else does not work.
is this a known issue?#2020-11-2712:22souenzzoI tried to reproduce this issue here
https://gist.github.com/souenzzo/3ba161909006ef08d53ac63a1d622fa2
But I can't understand where you are seeing this nil#2020-11-2712:52daniel.spanieli am going to run this gist in my db and see if i can alter it to show good reproduction of issue
#2020-11-2713:17daniel.spanielthis does work on my db so that was interesting. thanks. i now have to correlate this with my schema and see what is different. Muchos thanks again!#2020-11-2716:39daniel.spanieli found the problem .. we are doing this
{:db/ident :accounting-category/company+number
:db/valueType :db.type/tuple
:db/tupleAttrs [:entity/company :accounting-category/number]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value
}
#2020-11-2716:40daniel.spanieland this attribute entity/company is shared with other entities.#2020-11-2716:40daniel.spanielseems like datomic is trying to make this tuple for any entity with that attribute and not just the accounting-category entity#2020-11-2716:48daniel.spanielis there a way to stop that and only make this constraint tuple when this exact entity is created / edited?#2020-11-2714:35jcfIs anyone here using AssumeRole/STS to delegate access to sub-accounts in AWS, and successfully connecting to a running Datomic system with the Datomic CLI?
./datomic cloud list-systems --profile example-dev
WARNING: When invoking clojure.main, use -M
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true
Execution error (ExceptionInfo) at datomic.tools.ops.aws/invoke! (aws.clj:83).
AWS Error: Unable to fetch credentials. See log for more details.
Running aws --profile example-dev s3 ls works as expected so I think this might be a problem in the Datomic CLI side of things.
My ~/.aws/config looks good to me:
[profile example]
region=eu-west-2
[profile example-dev]
role_arn=arn:aws:iam::111111111111:role/developer
source_profile=example
Credentials should be inherited from the example profile… the AWS CLI appears to get this right so I think my ~/.aws stuff is kosher.#2020-11-2714:37jcfI vaguely remember a problem with some Datomic tooling and use of assumed roles. The workaround was to juggle your AWS config about so you're not using profiles but this won't work here as I have to STS my way into the sub account. No direct access possible.#2020-11-2714:41jcfI've added the SSH ingress rule to the bastion security group, and I've attached the Datomic Admin policy to the role that gets assumed when you switch into the sub-account.#2020-11-2714:42jcfI don't know what the "log" is that the error refers to. I've seen that in cognitect-labs/aws-api too and didn't know what log was being referred to there either. 🙈#2020-11-3012:39jcfI've created a question over on http://ask.datomic.com: http://ask.datomic.com/index.php/540/connect-running-datomic-system-using-profiles-assumed-roles#2020-11-3017:38jarethttps://forum.datomic.com/t/datomic-cloud-732-8992/1703{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2020-11-3017:38jaret@U11SJ6Q0K This release has the fix for Presto/analytics in it. I'll update in the ask thread, but wanted you to be made aware here 🙂#2020-11-3017:39tatut:thumbsup: thanks for the notification#2020-12-0113:02tatutI have a datomic cloud dev env with solo topology I’d like to upgrade to production (to use HTTP direct), can I just delete the old compute stack and create a new one?#2020-12-0115:34jaret@tatut HTTP direct requires a production topology, but you can upgrade the solo to production topology by updating the solo compute stack https://docs.datomic.com/cloud/operation/howto.html#convert-solo-to-production#2020-12-0115:35jaretnote you cannot convert back to solo once you move to production as that is unsupported.#2020-12-0115:36jaretand you can technically delete solo compute and relaunch prod compute against the same storage system, but upgrade is our documented method.#2020-12-0117:12prncHi, datomic noob here, I have IONS related question and would appreciate some pointers!
I’m on solo topology and would like to expose a ring handler (w/ reitit) through api gateway http api lambda integration.
But the router seems to be having trouble making a correct match when accessed through the API gateway's default endpoint
(i.e. https://blahblah.execute-api.eu-west-1.amazonaws.com)
(ns nette.system.web.handler
(:require [ring.middleware.keyword-params :refer [wrap-keyword-params]]
[ring.util.response :as resp]
[reitit.ring :as ring]
[reitit.core :as r]
[reitit.ring.middleware.parameters :as parameters]
[datomic.ion.lambda.api-gateway :as apigw]))
(defn router []
(ring/router
[["/" {:name ::home-page/home-page-view
:handler (fn [_]
{:status 200
:body "Nette: Root URL"})}]]
{:data {:middleware [parameters/parameters-middleware
wrap-keyword-params]}}))
(defn app []
(ring/ring-handler
(router)
(ring/routes
(ring/create-resource-handler
{:path "/"})
(ring/create-default-handler
{:not-found (fn [req]
{:status 404
:body "Nette: Route Not Found"})}))))
(def ion-app
(apigw/ionize (app)))
Expecting match on “/” w/ response: Nette: Root URL actual: Nette: Route Not Found (default not found handler).#2020-12-0117:12prncSo it seems that something, is getting to the handler, just not what I expect ;)#2020-12-0119:16kennyIf I know I have a large amount of Datomic writes incoming, is there a supported way to preemptively scale up DDB?#2020-12-0120:05csmI imagine you could directly change the read/write throughput on the DynamoDB table. If it’s using autoscaling, you could increase the minimum throughput.#2020-12-0213:36oxalorg (Mitesh)I'm trying to add a child entity with a ref to a parent entity using a lookup-ref
(d/transact conn {:tx-data [{:comments/title "foo" :comments/article [:article/title "bar"]}]})
But my article data is incomplete. So sometimes there is no article called "bar".
This gives me a :db.error/not-an-entity error. How would I go about writing this transaction so that if "bar" does not exist it still adds the child "foo" but leaves its :comments/article attribute empty?
1. Change the [:article/title "bar"] to {:article/title "bar"}. Assuming :article/title is unique and :comments/article is a to-one relationship, this won't throw the error, but will either use the existing article or create a new article (with just that attribute).
2. If you don't want to create a new (possibly incomplete article), you need to query the db for existence before doing the transact and modify the transaction accordingly.
3. If you're querying the DB to check for existence and then doing a transact, this may lead to a race-condition. If this is important to avoid, you can instead create a custom transaction function and move the check for existence within the transactor. This will guarantee that you don't have a race-condition (your transaction function will check for existence of the article and then transform the transaction data accordingly before committing).
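pithyless's third option might be sketched as a classpath transaction function (Cloud style; the namespace, function, and attribute names are illustrative, and the function must be deployed where the transactor can load it):

```clojure
;; Sketch: build the comment's tx-data inside the transaction, linking
;; to the article only if it exists at transaction time. Because this
;; runs in the transactor, there is no read-then-write race.
(ns myapp.tx-fns
  (:require [datomic.client.api :as d]))

(defn comment-with-optional-article
  "Transaction function: returns tx-data for a new comment, including
  :comments/article only when an article with the given title exists."
  [db title article-title]
  (let [article (ffirst (d/q '[:find ?a
                               :in $ ?t
                               :where [?a :article/title ?t]]
                             db article-title))]
    [(cond-> {:comments/title title}
       article (assoc :comments/article article))]))

;; Invoked from tx-data as a list; the db value is passed implicitly
;; as the first argument:
;; (d/transact conn
;;   {:tx-data [(list 'myapp.tx-fns/comment-with-optional-article
;;                    "foo" "bar")]})
```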
#2020-12-0307:31oxalorg (Mitesh)Thank you so much pithyless, this was of great help! I'm going to try these out and see what works for me. 🙂#2020-12-0214:48plexusI'm setting up Datomic Analytics for a client, and am running into an issue with BigDecimal
{:type "java.lang.ArithmeticException",
:message "Rounding necessary",
:suppressed [],
:stack
["java.base/java.math.BigDecimal.commonNeedIncrement(BigDecimal.java:4529)"
"java.base/java.math.BigDecimal.needIncrement(BigDecimal.java:4585)"
"java.base/java.math.BigDecimal.divideAndRound(BigDecimal.java:4493)"
"java.base/java.math.BigDecimal.setScale(BigDecimal.java:2799)"
"java.base/java.math.BigDecimal.setScale(BigDecimal.java:2732)"
"io.prestosql.spi.type.Decimals.encodeScaledValue(Decimals.java:172)"
"io.prestosql.spi.type.Decimals.encodeScaledValue(Decimals.java:166)"
"datomic.presto$create_connector$reify$reify$reify$reify__2435.getSlice(presto.clj:348)"
#2020-12-0214:49plexuscould this be a bug in the connector?#2020-12-0216:17ghadi@plexus I seem to recall the latest release notes having something about that#2020-12-0217:01Michael Stokleyare there diagramming techniques or approaches that are more suited to datomic than traditional sql rectangles - "an entity relationship diagram"?#2020-12-0217:01Michael Stokleylike drawing nodes and named edges, maybe?#2020-12-0217:03Alex Miller (Clojure team)erds are still pretty useful as long as you keep in mind that entity "types" are a fiction of your model, not a constraint in Datomic{:tag :div, :attrs {:class "message-reaction", :title "pray"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("🙏")} " 3")}
#2020-12-0217:04Alex Miller (Clojure team)for example, here's a Datomic schema Rich made for Codeq https://github.s3.amazonaws.com/downloads/Datomic/codeq/codeq.pdf#2020-12-0217:07Michael Stokleythank you! looks like it leaves off the entity names - which, as you say, are a fiction.#2020-12-0217:07Alex Miller (Clojure team)in any of those rectangles you have a bunch of attributes that are used together (but just keep in mind that the box doesn't really exist). the lines (attributes) are the important part#2020-12-0217:08Alex Miller (Clojure team)I think yellow boxes there indicate uniqueness#2020-12-0217:08Alex Miller (Clojure team)there is one for the mbrainz sample at https://github.com/Datomic/mbrainz-sample#2020-12-0217:09Alex Miller (Clojure team)that one has a stronger sense of entity type (but really is just a shared namespace for the attrs)#2020-12-0217:11Alex Miller (Clojure team)these are both good examples of what Datomic devs do - different variants may emphasize different aspects of the schema (type, cardinality, uniqueness, component)#2020-12-0217:12Michael Stokleythis is very helpful, i appreciate it.#2020-12-0217:13Michael Stokleygood to know that this is the diagramming style favored by other datomic developers.#2020-12-0220:31WojtekI have few really slow datomic queries - how could I improve them?#2020-12-0222:42Lennart BuitThe usual answer would be: do you have the order of your clauses correct#2020-12-0222:42Lennart Buithttps://github.com/Datomic/day-of-datomic/blob/master/tutorial/decomposing_a_query.clj#2020-12-0223:54Wojtekthank you! but I have already tried to reorder my clauses without improvement 😞#2020-12-0308:21plexusI don't suppose there's a way to make a peer server serve up a database that was created after the peer server started? (apart from restarting the peer server)#2020-12-0318:13jaretNo you would have to restart. What is the use case for doing this? Perhaps this is something we should consider adding a feature for. 
As an aside you can pass multiple -d options to serve multiple databases and serve in-memory dbs with the peer server.#2020-12-0410:48plexusthis is for a multi-tenant system where one tenant = one db. We are setting up analytics and still figuring out what our setup will look like. It's appealing to have a single peer server = single catalog, but then we would have to restart it when adding a tenant.#2020-12-0316:45TwanCan you clone a full Datomic setup by copying the underlying storage over? For example, when copying Postgres dumps & importing them afterwards#2020-12-0318:21jaretAre you talking about on-prem or cloud? In on-prem the supported method would be backup/restore. You can even use backup and restore to move between underlying storages: https://docs.datomic.com/on-prem/backup.html
Please note that Datomic Backup/Restore is not intended as tool for "forking" a DB, but you can restore into a URI that already points to a different point-in-time for the same database. You cannot restore into a URI that points to a different database.#2020-12-0407:45TwanOn-prem, of course 😉#2020-12-0407:51TwanWe did copy the Postgres data with pgdump & psql -f, but now we seem to end up with partial data, with entries that consist of :datomic-replication/source-eid. Is that expected, or are we expecting something that cannot work?#2020-12-0410:54TwanAh, we were shadowing our own databases apparently on both storage and Datomic level. It turns out, this is possible btw 🙂#2020-12-0412:30favilaIt’s possible if your storage backups are atomic, consistent backups (no read-uncommitted or other read anomalies). Not all can (dynamo) or do by default (MySQL?) so just be careful
#2020-12-0317:54jacksonQuestion about on-prem peer capacity.. the docs indicate that 4GB memory is recommended for production. If we have a beefy server that has plenty of ram, is there benefit to scaling everything up? Accounting for other processes being run and 64-bit java, etc.#2020-12-0318:00favilabenefit exists in increasing peer object cache up to the size of the peer’s working set (or the database); you can also run queries with larger intermediate result sets (which must always fit in memory). No benefit beyond these. Risk of large heap is the usual with java CMS or G1GC: longer pauses. If you’re using a fancy new pauseless collector this should also be a non-issue.#2020-12-0318:03jacksonOur db is quite large and we've already started rewriting some of our heavier queries with datoms but having a larger cache in the peer should mean fewer trips to pull indices (hopefully). Does the transactor's memory need to increase to match the peers?#2020-12-0318:05favilano; you should size the transactor based on its own write and query load, not peers
#2020-12-0318:05favilai.e., treat it like a peer#2020-12-0318:06favilajust to transact things it does have to perform some queries (e.g. to enforce uniqueness or cardinality, or running db/ensure predicates, or transaction functions)#2020-12-0318:06favilabut you can judge that load separately from the number of other peers#2020-12-0318:07jacksonok awesome, thanks for the help!#2020-12-0417:08jcfHi all! I'm swapping out on-disk storage with dev-local for an in-memory database (I don't need durability for CI/tests) and I've followed the docs here by adding :storage-dir :mem to my client config…
https://docs.datomic.com/cloud/dev-local.html#memdb
That call is blowing up with this trace however:
1. Caused by java.lang.IllegalArgumentException
No implementation of method: :as-file of protocol:
#' found for class: clojure.lang.Keyword
core_deftype.clj: 583 clojure.core/-cache-protocol-fn
io.clj: 35
io.clj: 424
io.clj: 418
impl.clj: 331 datomic.dev-local.impl/require-storage-dir!
impl.clj: 328 datomic.dev-local.impl/require-storage-dir!
impl.clj: 340 datomic.dev-local.impl/create-client
impl.clj: 337 datomic.dev-local.impl/create-client
impl.clj: 373 datomic.dev-local.impl.DevLocal/fn
Atom.java: 37 clojure.lang.Atom/swap
core.clj: 2352 clojure.core/swap!
core.clj: 2345 clojure.core/swap!
impl.clj: 361 datomic.dev-local.impl.DevLocal/_impl_configure_system
impl.clj: 433 datomic.dev-local.impl/ensure-client
impl.clj: 423 datomic.dev-local.impl/ensure-client
Var.java: 384 clojure.lang.Var/invoke
impl.clj: 24 datomic.client.api.impl/dynarun
impl.clj: 21 datomic.client.api.impl/dynarun
impl.clj: 31 datomic.client.api.impl/dynacall
impl.clj: 28 datomic.client.api.impl/dynacall
api.clj: 100 datomic.client.api/client
api.clj: 48 datomic.client.api/client
That looks to me like I should have a :storage-dir that's resolvable to a file but the docs say this keyword :mem is supported. Given it's gone 5pm here and it's been a long week I'm guessing this is me missing something obvious but it's not jumping out at me…
I'll try jacking in and see if I can jump to the source to see how this works. Clojure is so awesome! 😄#2020-12-0417:09jcfMy client config looks like this:
{:server-type :dev-local
 :storage-dir :mem
 :system "ci"}#2020-12-0417:24kennyCan you provide the code you're using to get this stacktrace?#2020-12-0417:26jcf@U083D6HK9 it's the call to datomic.client.api/client in this component:
(defrecord Datomic [client-config conn db-name]
  component/Lifecycle
  (start [c]
    (let [client (d/client client-config)
          _ (d/create-database client {:db-name db-name})
          conn (d/connect client {:db-name db-name})
          tx-data (schema c)]
      (d/transact conn {:tx-data tx-data})
      (assoc c :client client :conn conn)))
  (stop [c]
    #_(some-> c :client (d/delete-database {:db-name db-name}))
    (dl/release-db client-config)
    (dissoc c :client :conn)))
#2020-12-0417:26jcfThe next line in the trace would be the (let [client (d/client ... line above.#2020-12-0417:27kennyThanks. What version of dev-local are you running?#2020-12-0417:29jcfI've tapped the client-config just to be safe, and can see this in REBL:#2020-12-0417:30jcfVersions coming right up! 🙂#2020-12-0417:30jcfcom.datomic/dev-local {:mvn/version "0.9.203"}#2020-12-0417:31kennyCan you try updating to the latest 0.9.229? Also ensure you're on the latest client version 0.8.102.#2020-12-0417:31jcfWill do!#2020-12-0417:32jcfClient is up to date. Bumping dev-local now and restart my JVM.#2020-12-0417:41jcfProgress! Looks like updating the dev-local dep has gotten me the :mem support I need. Now I just need to pass in :system and :db-name to release-db.
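A minimal sketch of that cleanup, assuming the dev-local `dl/release-db` arg-map takes `:system` and `:db-name`, and that the `release-all-dbs!` helper name is hypothetical:

```clojure
;; Sketch: release every dev-local db for a system, e.g. between CI runs.
;; Assumes com.datomic/dev-local is on the classpath and :system matches
;; the client config discussed above.
(require '[datomic.client.api :as d]
         '[datomic.dev-local :as dl])

(defn release-all-dbs!
  [client system]
  (doseq [db-name (d/list-databases client {})]
    (dl/release-db {:system system :db-name db-name})))
```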
I wonder if I can merge the client config and the arg-map passed to release-db et al… :thinking_face:#2020-12-0417:41jcfProbably not a good idea.#2020-12-0417:42kennyI could be mistaken but I believe release-db has always needed a :db-name.#2020-12-0417:42kennyIf you'd like to release all dbs, you can call d/list-databases and loop over that.#2020-12-0417:44jcfMakes sense. Thanks for the pointers, @U083D6HK9!#2020-12-0417:45jcfI wonder if there's a footnote to add to spec-ulation about accretion not helping people like me with old code and newer docs. 😄#2020-12-0719:12Michael Stokleywhen writing a ref attribute, how can we indicate the type? or is that deliberately omitted because datomic (and perhaps clojure) doesn't make use of types, only attributes? in some codebases, i see idents out there called "schema namespaces", and the attr would indicate it is associated with that schema (as would the refed entity, perhaps). or is it best to have the attr namespaced to match the refed entity?#2020-12-0719:14Michael Stokleyi wonder if that last option - matching the attr namespace to the namespace of the ref'ed entity - is always suitable. for example, you might have a :person/child attr ref to another :person entity. then the namespaces wouldn't match.#2020-12-0719:24favilaentities are untyped (they’re just an ID to join facts together), so refs are also untyped#2020-12-0719:25favilae.g. what makes an entity a “person”?#2020-12-0719:27favilaYou could layer a type system on top.
a common approach is to add additional attributes to the ref attribute itself that indicate (human- or machine-readably) the range (such as a type) the ref may have#2020-12-0719:27favilaa more recent feature is to use a pair of entity specs with db/ensure: https://docs.datomic.com/on-prem/schema.html#entity-specs#2020-12-0719:28favilaspec the referent, spec the referred, and add an entity predicate to the referent that asserts that the referred conforms; then add a :db/ensure when you transact changes to the referrer#2020-12-0721:56Raymond KoIs there a canonical direction in modeling parent-child relationships in datomic schemas? For example, consider a schema which represents books and their respective chapters. I see two ways.
1. have a :book/id , :chapter/id and :book/chapter where book -> chapter. In order to delete the chapter you have to retract the chapter entity and one of the (just realized refs solve this problem, ignore this).
2. have a relationship where chapter -> book. like :chapter/book. This is a db.cardinality/one and has the benefit of only needing to retract the entity to delete. My main issue of this is that it seems reversed and for more complicated cases, it is not always clear it is like :child/parent especially when there is domain specific terminology. Is there a standard convention like :chapter/book-parent or :chapter/parent-book to denote attributes of this type?#2020-12-0722:05benoitOne difference is that you can use :db/isComponent on :book/chapter but probably not on :chapter/book.
#2020-12-0722:06Lennart BuitWas just about to say, but don’t you get a many-to-many relation from that#2020-12-0722:08benoitI'm not sure what you mean. https://docs.datomic.com/cloud/schema/schema-reference.html#db-iscomponent#2020-12-0722:10Lennart Buit#2020-12-0722:13benoitI think you might be right though. The entity API might return a set of "one" book.#2020-12-0722:14Raymond KoThanks, I did not know about :db/isComponent and considering my own project nests further with other types and datascript supports this, it seems like there is only one way to go if I want easy deletions.#2020-12-0723:37favila@U963A21SL @UDF11HLKC isComponent also makes the “reverse” direction cardinality-one
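A sketch of the :db/isComponent approach under discussion, with hypothetical book/chapter attribute names:

```clojure
;; Sketch: :book/chapter is a component ref, so retracting the book also
;; retracts its chapters, and the reverse direction acts cardinality-one.
;; Attribute names here are hypothetical.
(def book-schema
  [{:db/ident       :book/id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :book/chapter
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/many
    :db/isComponent true}
   {:db/ident       :chapter/title
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])

;; Deleting a whole book, chapters included:
;; (d/transact conn {:tx-data [[:db/retractEntity [:book/id book-id]]]})
```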
#2020-12-0723:37favila(even if there are multiple referents!)#2020-12-0723:40joshkhi agree that isComponent is the best solution here. if in the future you find that components might not fit your data model, you could consider writing transaction functions that handle data cleanup from a business logic perspective.. "when this book is removed from the library, retract chapters and user highlights about those chapters and any reservations for the book". i use a combination of tx fns and a "retraction api" that cleans up the messy parts of the graph, like related unique tuple constraints that no longer make sense when parts of their value are retracted (such as book+chapter+critic+rating -> nil+nil+critic+rating)#2020-12-0723:45joshkhi've often been tempted to add "reverse component references" between entities just for the sake of data cleanup, but after playing with the idea it felt wrong. components are great.. just saying that they might only get you so far before a tx fn becomes a better option 🙂#2020-12-0723:53Lennart BuitAh yeah @U09R86PA4, I started doubting indeed ^^, testing it in the repl confirms you are right. Thanks!#2020-12-0808:23ChristosHello, I have installed Datomic dev-local, and try to run a simple query on the “Movies” db but I get the error: :db.error/not-an-entity Unable to resolve entity: :release/name. Any ideas? Many thanks in advance#2020-12-0808:25jcfHave you transacted the “Movies” schema? If so, can you share the query you’re running?#2020-12-0808:51ChristosI have not transacted the “Movies” schema! I will do that and retry :-)#2020-12-0810:49Christos@U06FTAZV3: I have created a simple schema
(def sch [{:db/id "1"
           :db/ident :name
           :db/cardinality :db.cardinality/one
           :db/valueType :db.type/string
           :db/doc "The name"}])
and transacted it:
(d/transact conn {:tx-data sch})
then I add something:
(d/transact conn {:tx-data [{:db/id "1"
                             :name "Christos"}]})
and then I query:
(d/q {:query '[:find ?e ?name
               :where [?e :name ?name]]
      :args [db]})
and get the error:
Execution error (ExceptionInfo) at datomic.core.error/raise (error.clj:55).
:db.error/not-an-entity Unable to resolve entity: :name#2020-12-0813:46favilaIs that db object from after your transactions?#2020-12-0814:10ChristosNo, it is not! Many thanks…:-)#2020-12-0814:10ChristosIt works now 🙂
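The fix being discussed, sketched: query a db value obtained after the transactions, for example the `:db-after` value returned by `d/transact`:

```clojure
;; Sketch: use the :db-after from the transaction result (or call
;; (d/db conn) again) instead of a db value captured before transacting.
(let [{:keys [db-after]} (d/transact conn {:tx-data [{:name "Christos"}]})]
  (d/q {:query '[:find ?e ?name
                 :where [?e :name ?name]]
        :args [db-after]}))
```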
#2020-12-0900:26mruzekwHi all, I'm looking into Datomic for a new project of mine that might have live-collaborative editing. I noticed that Datomic Cloud doesn't allow use of the tx-report-queue to broadcast updates. Are there any alternative solutions for Datomic Cloud for realtime updates? Thanks!#2020-12-0901:22Saurabh SharanI think you should be able to use tx-range, but you’ll have to manage the cursor into the transaction log yourself https://docs.datomic.com/cloud/time/log.html#2020-12-0919:00Sam DeSotaHey, I'm running into a really strange issue with datomic boot datoms. A member on my team accidentally transacted to an entity with :db/id 3 , but that entity refers to a "boot" datom. I think datomic should have rejected the transaction, but unfortunately it allowed it:
{:t 1095565,
:data
[#datom[13194140628877 50 #inst "2020-12-09T18:09:14.299-00:00" 13194140628877 true]
#datom[3 95 :order.status/fulfillManual 13194140628877 true]]}
Now, It's impossible to remove this datom. When trying to retract, I get:
:db.error/datom-cannot-be-altered Boot datoms cannot be altered: ...
We have many queries that depend on this attribute having valid data, that are now broken. Is there anyway to retract this mistake?#2020-12-0919:25jaretHi @USQ5P4FT4 Is this dev-local, on-prem, cloud db? Also how important is this database?#2020-12-0919:32jaretYou can transact your own facts about system entities (emphasis entities, not datoms). You cannot retract such facts.
That being said, I need to create a reproduction of this issue and I imagine that a query workaround is to have queries ignore entities in the system partition (partition 0).
Are you seeing any errors? The entity that was altered pointed to:#2020-12-0919:32jaret#:db{:id 3,
:ident :db.part/tx,
:doc "Partition used to store data about transactions. Transaction data always includes a :db/txInstant which is the transaction's timestamp, and can be extended to store other information at transaction granularity."}#2020-12-0919:33jaretI guess I know you are saying you have queries that depend on that attribute having valid data, but I want to understand if you are seeing errors outside of query data validation.#2020-12-0919:34jaretIf you get a chance shoot an email to support so I can track this issue with a ticket and we can share relevant details.#2020-12-0919:35Sam DeSotaHi Jaret. This is with on-prem. It's a mission critical database. Needed to fix this, so what I did is I created a new entity, copied the old entity data to the new entity, then renamed the entity to the old entity name. Of course this will mean historical queries will be weird, but had to be fixed.
Will send an email to support with details to create a ticket, thank you!#2020-12-0919:47ChicãoHi, I'm trying to select user by name and email, but if email and name are empty return all users
This is my query but works only when I have email and username
(->> (d/q '{:find [[(pull ?user [:user/id
                                 :user/name
                                 :user/email])
                    ...]]
            :in [$ ?name ?email]
            :where [[?user :user/consumer]
                    [?user :user/email ?email]
                    [?user :user/name ?name]
                    [(util/includes? ?name ?email)]]}
          db (or username "") (or email "")))
May someone can help me?#2020-12-0920:01Lennart BuitI’m not sure I’m understanding what you are saying, but you can generate query maps programatically, its just a map, after all — say if ?name is not provided, not generate the [?user :user/name ?name] clause#2020-12-0920:03ChicãoIf the name and email was not provide return all entity#2020-12-1007:48tatutIn datomic cloud is it possible to use with-db in transaction function? It requires a connection but the tx fn is only passed a db value.#2020-12-1106:40onetommaybe you can call datomic.client.api/with on that db value you receive in the transaction function?
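Lennart's suggestion above, sketched: since the query is just a map, add the :where clauses (and inputs) only for the filters that are actually present. The `find-users` helper and attribute names are assumptions taken from the question:

```clojure
;; Sketch: build clauses conditionally so a missing name/email does not
;; constrain the result, instead of binding empty-string placeholders.
(defn find-users [db {:keys [name email]}]
  (let [where (cond-> '[[?user :user/consumer]]
                name  (conj '[?user :user/name ?name])
                email (conj '[?user :user/email ?email]))
        in    (cond-> '[$]
                name  (conj '?name)
                email (conj '?email))
        args  (cond-> [db]
                name  (conj name)
                email (conj email))]
    (d/q {:query {:find '[[(pull ?user [:user/id :user/name :user/email]) ...]]
                  :in    in
                  :where where}
          :args args})))
```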
I would be curious to know whether it works or not.#2020-12-1516:59kennyfyi, I just ran into this too 🙂 Created a question on ask.datomic https://ask.datomic.com/index.php/557/can-you-use-d-with-inside-a-cloud-transaction-function.#2020-12-1019:23tvaughanIs it possible to use an attribute predicate on a ref? If so, what would be passed to the predicate function? Our desire is to limit the reference based on a value of an attribute in the referenced entity. I think the answer is no, but we're not reading the documentation the same. We'd really appreciate a clarification, thanks#2020-12-1020:04favilaYou will get an entity ID and no database, so no, you cannot use an attr predicate for this#2020-12-1020:05favilayou need :db/ensure to enforce something like this safely and atomically#2020-12-1020:06tvaughanGreat. Thank you @U09R86PA4#2020-12-1020:40Dave@U09R86PA4, what if the entity ID returned consists of a single attribute/value pair with :db.unique/value, i.e. the related schema was purposely designed to have only one attribute with a unique value, would that change your answer in any way?#2020-12-1020:42favilaI’m not sure what you mean? entity id returned from what?#2020-12-1020:44DaveMaybe I'm misinterpreting what you meant by
> You will get an entity ID#2020-12-1020:46DaveCan you elaborate?#2020-12-1020:47DaveI took it as the answer to @U0P7ZBZCK question below.
> If so, what would be passed to the predicate function?#2020-12-1020:50favilaThe contract for attribute predicates is (fn [v] ) => true (to allow) | anything-else (to reject), where v is the value that will be asserted/retracted. For refs, that value is a long representing an entity id#2020-12-1020:52favilaattribute predicates must decide to accept or reject based only on the value. without a database it can’t learn anything about what is asserted for the entity#2020-12-1020:58DaveThanks @U09R86PA4.#2020-12-1105:56onetomdo i understand well, that transaction functions can't be deployed independently into various stages within 1 datomic cloud system?
is it not an issue in practice?#2020-12-1217:42Huahaihow to do something equivalent to SPARQL OPTIONAL in datomic?#2020-12-1219:04favila[(get-else $ ?e ?attr ?non-nil-sentinel-value) ?captured-value]#2020-12-1219:05favilaor (or-join [?e ?v] [?e ?a ?v] (and (not [?e ?a]) [(ground "non-nil-sentinel") ?v])) for more complex cases#2020-12-1219:05favilaor use rules and make sure one implementation always matches and one never does#2020-12-1219:05favila(for any given thing you are checking)#2020-12-1219:06favilanote unfortunately you can’t safely have nil values in your intermediate result sets so you usually have to replace it with some non-nil sentinel value#2020-12-1219:07favilanamespaced-keywords work well IME#2020-12-1219:07favilaand then postprocess the query result back to nil if you need it#2020-12-1303:07Huahaithx#2020-12-1419:39jarethttps://forum.datomic.com/t/now-hiring-datomic-technical-liaison/1718#2020-12-1419:56Cameron Kingsburyis there a way to set/intersection a collection of sets in datomic without using ions?#2020-12-1419:57Cameron Kingsburyessentially I have a subquery that is returning a list of sets that I want to get the intersection of before returning in the top level query#2020-12-1516:56kennyCan you use d/with inside a Cloud transaction function? It's not clear from the https://docs.datomic.com/cloud/transactions/transaction-functions.html if the db passed to the tx fn is one created by d/with-db.
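favila's get-else suggestion above, sketched with a hypothetical :user/email attribute and a keyword sentinel:

```clojure
;; Sketch: OPTIONAL-style matching. get-else binds the sentinel :none
;; when :user/email is absent; post-process :none back to nil outside
;; the query if needed.
(d/q {:query '[:find ?name ?email
               :where
               [?e :user/name ?name]
               [(get-else $ ?e :user/email :none) ?email]]
      :args [db]})
```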
#2020-12-1516:58kennyHaha, looks like others are interested in this too: https://clojurians.slack.com/archives/C03RZMDSH/p1607586505104400#2020-12-1516:58kennyAdded a question here: https://ask.datomic.com/index.php/557/can-you-use-d-with-inside-a-cloud-transaction-function
#2020-12-1618:31msshey there, trying to formulate a query and wondering how best to express it:
(d/q '[:find ?tuple
       :where [?tuple :tuple-ns/attr1+attr2+attr3 [tuple-attr-1 tuple-attr-2 tuple-attr-3]]]
     db)
I’m trying to query for a tuple that matches dynamic attrs. one attr could be an ident, though, and I can’t seem to formulate the query with the ident itself. e.g. in the query above, if tuple-attr-1 was an ident, I would need to query with the eid for the ident as opposed to the ident itself. is there any way around that?#2020-12-1618:42favilad/entid to normalize input to eids before issuing the query. if that’s not possible, consider writing a rule to do the same thing.#2020-12-1705:34steveb8nQ: where does Ion push/deploy put the jars it downloads? I want to cache these to speed up my CI deploys. I already have deps.edn jars caching so it must be somewhere else#2020-12-1722:29steveb8n@U1QJACBUM is this info available somewhere? Or should I ask in the Datomic forum instead?#2020-12-1705:45onetomAre there any plans to support a multi-region Datomic Cloud system?
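favila's d/entid suggestion, sketched with the peer API (attribute and variable names are the hypothetical ones from the question):

```clojure
;; Sketch: d/entid resolves idents and lookup refs to eids and passes
;; plain eids through, so the tuple value can be normalized before the
;; query, and the value is passed via :in rather than spliced into the
;; quoted form.
(let [v [(d/entid db tuple-attr-1) tuple-attr-2 tuple-attr-3]]
  (d/q '[:find ?tuple
         :in $ ?v
         :where [?tuple :tuple-ns/attr1+attr2+attr3 ?v]]
       db v))
```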
I was just wondering if https://aws.amazon.com/dynamodb/global-tables/ would make it possible at all, or not, since they are only eventually consistent?#2020-12-1709:23onetomThe Datomic Cloud pricing page (https://aws.amazon.com/marketplace/pp/prodview-otb76awcrb7aa#pdp-pricing) shows a lot of extra instance types (`{{t2,t3}.{small,medium},m5.large}`) with prices for the Production topology, but the CloudFormation templates from the releases page only allow i3.{x,}large.
Is that a mistake in, or a restriction of, the AWS Marketplace solution, or is it some upcoming feature (to allow smaller instance types)?#2020-12-1716:13jaretThe marketplace listing page is its own thing managed by marketplace and there are often mistakes in the dropdowns. I'll report this over to them, but it's possible they reviewed the QA template instance sizes, Production template instance sizes, bastion, and Solo template instance sizes and combined them. We have our supported instance sizes documented here https://docs.datomic.com/cloud/operation/planning.html#instance-sizes
if you look into the separately released cloudformation template for the solo compute stack (https://s3.amazonaws.com/datomic-cloud-1/cft/732-8992/datomic-solo-compute-732-8992.json) and look for AWS::AutoScaling::LaunchConfigurations, then you can see that the TxLaunchConfig defines the InstanceType as "Fn::FindInMap": ["Datomic", "defaults", "InstanceType"], which is hardwired in the Mappings section of the template to be t3.small and not configurable via template parameters.#2020-12-1712:57onetomI remember vaguely that an AMI-type AWS Marketplace product can only have 1 CloudFormation configuration, that's why Cognitect had to use a root template and nest the storage and solo-compute templates below it.
Maybe because of this restriction, the templates being used when you provision a system using the AWS Marketplace web interface, is slightly different from the one published on the https://docs.datomic.com/cloud/releases.html page....#2020-12-1712:57onetom@U051SPP9Z ^^^#2020-12-1712:58Petrus TheronThanks. They claim $1/day Solo topology, so trying to figure out how to get cost down to that.#2020-12-1713:03onetomjust pick the t3.small instance type for the Datomic Cloud option and t3.nano for the Utility Bastion#2020-12-1713:05onetomand i can attest that the claim is true, the solo setup indeed costs only about 30 USD a month (depending on the region of course) + whatever u pay for the amount of data u use, but initially that's virtually zero.#2020-12-1714:09benoitJust making sure you know about this https://docs.datomic.com/cloud/dev-local.html#2020-12-2108:21Petrus TheronDatomic Cloud pricing is soo confusing. It shows something different on the left from the right, and the actual charges don’t seem to match the EC2 instance sizes on the right?#2020-12-2112:37danierouxIt switched from Solo to Production in the two screens 😊#2020-12-2013:36Kurt SysJust wondering, what exactly is the difference between :db/ident and :db/id in Datomic? Is it like: db/id is the 'internal id', and :db/ident is an 'external id' (which should be something that makes sense to humans)?#2020-12-2014:28favilaDatomic’s data model is assertion/retraction of facts, represented as datoms#2020-12-2014:29favilaDatoms look like [eid attrid value txid op] where op is true or false for assertion or retraction#2020-12-2014:31favilaThat’s what’s actually in the db.
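A concrete datom with the five fields labeled as favila describes (ids are hypothetical; the attribute is shown as its ident for readability, though raw datoms carry its entity id):

```clojure
;; eid             attrid       value             txid            op
[17592186045418    :user/email  "ada@example.com" 13194139534312  true]
;; op: true = assertion, false = retraction
```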
:db/id in a map projection is representing the eid that is common to all the datoms projected into the map#2020-12-2014:32Kurt Sysyeah, that's how far I got right now... But we can use a :db/ident instead of :db/id , with :db/ident a more readable id?#2020-12-2014:32Kurt Sys(e.g. when querying)#2020-12-2014:33favilaDo you know what a lookup ref is?#2020-12-2014:33Kurt Sysyeah.#2020-12-2014:33favilaYou can think of idents as fundamentally lookup refs#2020-12-2014:34favilaThe attr is implied (it’s db/ident) and the indexing is special (they are kept in peer memory and they ignore retractions)#2020-12-2014:35favilaBut it’s still looking up an eid by the value of one of its datoms#2020-12-2014:35Kurt Sysright! I bit like: [:db/id <some ident>] lookup ref?#2020-12-2014:35Kurt Sysno, not entirely. sorry.#2020-12-2014:35Kurt Sys[:db/ident <some ident>] resolves to a unique :db/id#2020-12-2014:36favilaCorrect, but it doesn’t have the special properties of a raw lookup#2020-12-2014:36favilaSorry, bare keyword syntax#2020-12-2014:37Kurt Sysok, cool thx. It's only just another attribute, which happens to be unique?#2020-12-2014:38favilaYes, but it has special lookup syntax and uses special indexes so it’s faster and can be looked up even after retraction#2020-12-2014:38favilaBut fundamentally it’s a value lookup#2020-12-2014:38Kurt Sysallright. I get it. Thx!#2020-12-2014:39favilaI explained ident in terms of lookup ref, but historical note idents predate lookup refs by quite a bit#2020-12-2014:39favilaFor quite a while datomic did not have lookup refs#2020-12-2014:39Kurt Sys🙂 - ok, well, in any case, it's pretty clear now.#2020-12-2014:39Kurt Systhx.#2020-12-2014:40favilaThe special indexing is also why you should be careful about creating too many idents#2020-12-2014:41Kurt Sysoh, ok... 
well, 'too many' seems a bit vague to me, but I guess for most systems, this shouldn't be a big deal?#2020-12-2014:45favilaIf you keep it to schema-level, crafted-by-hand assertions that should be fine. It just shouldn’t be asserted on data#2020-12-2014:45favila(Hundreds of thousands or millions, on things that may be retracted)#2021-12-2915:00kennytiltonHey, @U09R86PA4. I am a Datomic noob myself, so caveat lector, but this same ident vs id threw me as well. I did some digging/learning and came up with this epiphany: Datomic is a symbolic database, just as Lisp is a symbolic language. The :db/ident attribute is how "symbols" are created. Importantly, these ident/symbols are the only things guaranteed by a future Datomic export/import mechanism. I had thought :db/ids would be that, but no, and that makes sense if idents/symbols have object identity. And <gasp> this is why we do not want to make "too many", just as in Lisp we are careful about loading up the symbol space. Final tip: one fun thing to do is create a new database and then examine the contents. We see Datomic is also a self-hosted DB, creating the primordial idents over a sequence of early transactions, including the ident :db/ident itself. Fun stuff!#2020-12-2120:53msshow are people handling migration of heterogeneous and homogeneous tuples given that tuple attributes can't be altered?#2020-12-2210:39onetomi'm still having an issue with deleting compute or query groups of a datomic cloud system (version 732-8992)
the stack hangs on deleting the LambdaSecurityGroup, because it's still attached to ENIs.
deleting DatomicLambdaRole fails with:
Cannot delete entity, must delete policies first. (Service: AmazonIdentityManagement; Status Code: 409; Error Code: DeleteConflict; Request ID: bfac688c-d62e-4d52-82ed-625c81837144; Proxy: null)
and DeleteDatomicLambdaEnis fails with:
Failed to delete resource. See the details in CloudWatch Log Stream: 2020/12/22/[$LATEST]d78708ea11a4499193ea28600b96feab
but i couldn't find that log stream, so not sure what does it say.
i've asked about it roughly a year ago, but didn't seem to get an answer:
https://clojurians-log.clojureverse.org/datomic/2019-12-14
does anyone have any suggestions to prevent this happening?#2020-12-2218:42PBAnybody know if the presto connector can do history queries?
#2020-12-2221:22kennyI'm running lots of transactions into a Datomic Solo topology. In the CW logs, I'm seeing an alert logged every few seconds that looks something like the below. Any idea why this would be occurring?#2020-12-2317:45Joe LaneHey @U083D6HK9, the error here is saying the Indexer was unable to complete a DDB operation to read a newly created index from DDB. I have a hunch that you've saturated your DDB provisioned operations for a Solo Topology.#2020-12-2320:14kennyWould I be able to see that in the DDB metrics?#2020-12-2322:26kennyWhat is the default timeout for Datomic client API calls?#2020-12-2323:48Joe Lane1 minute, used to be shorter. #2020-12-2323:49kennyHow was 1 minute chosen?#2020-12-2400:04Joe LaneNot sure how it was chosen but it is configurable to less or more if you desire.
Why?#2020-12-2400:04kennyJust curious. Primarily because you said it used to be shorter.#2020-12-2400:05kennyIt also seems important to add to the client API doc string. Couldn't find the info anywhere 🙂#2020-12-2400:07Joe Lanehttps://docs.datomic.com/cloud/client/client-api.html#timeouts#2020-12-2400:07kennyI meant in the code#2020-12-2400:07Joe LanePoint taken though :)#2020-12-2400:09kenny:limit specifies its default in the docstring#2020-12-2400:12kennyfyi, https://ask.datomic.com/index.php/563/add-default-timeout-to-client-api-ns-docstring#2021-12-2809:57WojtekI was trying to run datomic on AWS but when I want to create-database I get an error:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/invalid-ddb-region Not a supported dynamodb region: me-south-1
Any idea how to fix this?
AWS supports dynamodb in me-south-1: https://docs.aws.amazon.com/general/latest/gr/ddb.html#2021-12-2814:20favilawhich datomic is this? cloud or on-prem? Cloud is only supported on certain regions (this isn’t one of them, list here https://docs.datomic.com/cloud/operation/account-setup.html#regions). On-prem might just have an out of date sdk that doesn’t know about this region?#2021-12-2823:22Wojtekon-prem, datomic-pro-1.0.6222#2021-01-0413:25jaret@U017D57461Z how are you creating the DB and what docs are you following? Can you share the command you are using for on-prem? Are you using client or peer api? For on-prem you need to roll your own CFT. Have you done so? As a development convenience we provide an example template through our create-cf-template script. https://docs.datomic.com/on-prem/aws.html
However you also have to configure storage:
https://docs.datomic.com/on-prem/storage.html#provisioning-dynamo#2021-12-2822:32thumbnailHey! I'm trying out Datomic Analytics Support, I don't know where to put the metaschema .edn-files. They do not seem to be picked up when placed in ./presto-server/etc/x.edn on my datomic-peer installation, should they be somewhere else?
Configuration / catalog is all set up, and show tables shows the default db__attrs and db__idents columns. My metaschema.edn is also correct, see thread.#2021-12-2822:33thumbnailVery simple metaschema:
{:tables
{:person/id {}}}
And I verified that (d/q '[:find (count ?e) :where [?e :person/id]] db) returns non-zero#2021-12-2908:42thumbnailhttps://docs.datomic.com/on-prem/analytics/analytics-configuring.html#configuring-metaschema
describes etc-path/datomic, where etc-path is announced by presto as the Etc directory: (no surprises there).
so in my case: opt/datomic-pro-1.0.6202/presto-server/etc/datomic/x.edn worked. :thumbsup::skin-tone-2:#2021-12-3010:46onetomDoes anyone have an example setup for CircleCI obtaining Datomic dev-local from the new dev-tools package?
I'm wondering what's the simplest way to provide the credentials for the cognitect-dev-tools maven repo. Do I need to provide an example settings.xml which I copy to ~/.m2/settings.xml explicitly from the .circleci/config.yml job definition?
That much I figured out that the settings.xml can contain references to the environment, so I can work with a static file, like:
<settings xmlns=""
xmlns:xsi=""
xsi:schemaLocation="
">
<servers>
<server>
<id>cognitect-dev-tools</id>
<username>${env.COGNITECT_DEV_TOOLS_MVN_USER}</username>
<password>${env.COGNITECT_DEV_TOOLS_MVN_PWD}</password>
</server>
</servers>
</settings>#2021-12-3010:49onetomaccording to this old ticket, it's not currently possible to point Clojure CLI tools to a different maven user settings file:
https://clojure.atlassian.net/browse/TDEPS-99
which is otherwise possible to do directly with the mvn command line, using the -s / --settings option.#2021-01-0512:42jcf@U086D6TBN this is how I've solved this issue previously. I set secret vars in CircleCI and pull those in via the XML file you've described.
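One way to wire this up in CircleCI, sketched under the assumption that COGNITECT_DEV_TOOLS_MVN_USER / COGNITECT_DEV_TOOLS_MVN_PWD are set as project environment variables and the settings.xml above is checked in under .circleci/ (image tag and step names are illustrative):

```yaml
# .circleci/config.yml (fragment)
jobs:
  test:
    docker:
      - image: cimg/clojure:1.11
    steps:
      - checkout
      - run:
          name: Install Maven settings for cognitect-dev-tools
          command: |
            mkdir -p ~/.m2
            cp .circleci/settings.xml ~/.m2/settings.xml
      - run:
          name: Download deps (resolves the private repo)
          command: clojure -P
```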
#2021-01-0512:43jcfIf you're using GitHub actions there's a Maven action that will do the settings.xml dance for you based on strings in YAML rather than XML. 🤷#2021-01-0616:19onetomthanks for the reassurance!
#2021-12-3020:34kennyWhy does qseq appear to not take :timeout?#2021-12-3105:06onetomit might have something to do with its laziness?#2021-01-0110:14Francesco SardoHappy new year everyone 👋 I was getting back into Datomic Cloud and reading the docs I can't understand how the actual clojure ions are executed. Is the clojure ion sitting on a EC2 instance and the "ultimate lambda" is calling it directly? How are they scaled exactly?#2021-01-0512:40jcfHello hello!
I have a call to (datomic.ion.cast/initialize-redirect :stdout) in a code base I'm working on but I see no log output from calls to cast/dev. It looks like someone has already asked the question over on http://ask.datomic.com: https://ask.datomic.com/index.php/556/how-do-i-use-datomic-ion-cast-to-log-on-my-development-machine
Has anyone else encountered this issue? Is anyone seeing log output from calls to cast/* ?#2021-01-0512:45jcfI need to take the dog for a walk. Be back in an hour or so. 🚶#2021-01-0512:45ChicãoHi, can someone help me? I tried to deploy datomic to aws ec2 and when starting the transactor I got this error:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvid
and I've configured the aws cli on ec2 and exported AWS_ACCESS_KEY_ID and AWS_SECRET.
Does anyone have an idea what it could be?#2021-01-0512:45jcfThe env vars should be AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if I remember correctly.#2021-01-0512:46jcfThat said, you'll likely be told you shouldn't use env vars for authenticating EC2 instances. AWS handles that for you via IAM roles etc.#2021-01-0512:46jcfhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile#2021-01-0512:53ChicãoOh, thanks, I'll try configuring IAM roles#2021-01-0512:58jcfThe environment variables should work, mind. Could be good enough to get things going depending on what it is you're building.
Good luck, @UL618PRQ9!
#2021-01-0513:22tatutsomething weird in datomic cloud vs dev-local string length limits https://ask.datomic.com/index.php/567/string-length-limit-cloud-vs-dev-local (both are purportedly 4k chars), but we have over 4k in some environments and ran into problem when importing it to dev-local#2021-01-0514:33favilaRegardless of the “real” limit, operationally I suggest never, ever installing a string-typed attribute on any datomic (onprem, cloud, local) without an attribute predicate that limits its length to at least 4k but probably shorter. (same with other values which contain strings, like tuples, symbols, keywords.) Datomic’s dirnode/segment storage structure doesn’t work well with single large values because all values are completely inlined. Large segments can’t be cached by memcached (> 1MB), you may even hit size limits of the underlying storage, essentially breaking your database. IO cost is harder to predict, and your object cache becomes bloated with “nearby” large values that you may not be using.#2021-01-0514:35favilaPut large values into something else and store a reference in datomic#2021-01-0706:14tatutit worries me that this is left to the application developer, I would expect the database tx would throw exception if I try to put in db breaking things in it#2021-01-0706:15tatutand that this isn’t documented with suitably scary disclaimers “you need to check this length yourself, or your database might break”#2021-01-0713:51favilaI agree. I’m in the midst of a painful project to backfill this into a large database#2021-01-1103:34jacklombardApologies for the beginner question and for the cross post from the beginners channel (was asked to post here).
I'm having trouble doing a basic query using the peer library, I have tried both the dev (with the dev transactor running) and mem protocols.
(comment
(def db-uri "datomic:)
(def db-uri "datomic:)
(d/create-database db-uri)
(def conn (d/connect db-uri))
(def db (d/db conn))
(def movie-schema [{:db/ident :movie/title
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The title of the movie"}
{:db/ident :movie/genre
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/doc "The genre of the movie"}
{:db/ident :movie/release-year
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/doc "The year the movie was released in theaters"}])
(def first-movies [{:movie/title "Explorers"
:movie/genre "adventure/comedy/family"
:movie/release-year 1985}
{:movie/title "Demolition Man"
:movie/genre "action/sci-fi/thriller"
:movie/release-year 1993}
{:movie/title "Johnny Mnemonic"
:movie/genre "cyber-punk/action"
:movie/release-year 1995}
{:movie/title "Toy Story"
:movie/genre "animation/adventure"
:movie/release-year 1995}])
(d/transact conn movie-schema)
(d/transact conn first-movies)
(def all-movies-q '[:find ?e
:where [?m :movie/title ?e]])
(d/q all-movies-q db))
This is the error when I run (d/q all-movies-q db)
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/not-an-entity Unable to resolve entity: :movie/title
#2021-01-1103:35jacklombardTried derefing both the movie-schema and first-movies transactions so that it waits for them to (complete?), still the same error. Guessing the movie schema is not being persisted?#2021-01-1103:47cjmurphyThe db you are using is stale. After doing any transacts a new db needs to be grabbed.
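A minimal sketch of the fix being suggested, reusing the defs from the snippet above — deref the transactions, then query a db value obtained after them:

```clojure
;; `db` was captured before the transacts, so it cannot see the new schema.
;; Use the :db-after returned by a derefed d/transact (or call (d/db conn)
;; again) to query a basis that includes the schema and the movies.
(let [_ @(d/transact conn movie-schema)
      {:keys [db-after]} @(d/transact conn first-movies)]
  (d/q '[:find ?title
         :where [_ :movie/title ?title]]
       db-after))
;; => #{["Explorers"] ["Demolition Man"] ["Johnny Mnemonic"] ["Toy Story"]}
```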
#2021-01-1106:42oxalorg (Mitesh)^^ Doing something like this: (defn db [] (d/db conn)) and then replacing calls to db with (db) should solve this#2021-01-1108:39Lennart BuitYou also get a db-after value from d/transact !#2021-01-1111:28jcfBest practice is to use the :db-after from your transact call.
https://docs.datomic.com/on-prem/best-practices.html#use-dbafter#2021-01-1116:38PBI have my datomic transactor running from within a docker container. Upon upgrading to 0.9.6024 I am getting the following error:
Execution error (JdbcSQLException) at org.h2.engine.SessionRemote/done (SessionRemote.java:568).
Remote connections to this server are not allowed, see -tcpAllowOthers [90117-171]
I cannot find much about this. Can anybody give me any hints?#2021-01-1116:52favilahttps://forum.datomic.com/t/important-security-update-0-9-5697/379 ?#2021-01-1116:52favila(that’s a really old datomic you’re upgrading from if this is indeed your issue…)#2021-01-1116:54favilabasically if you want to connect to free or dev over tcp, you now need to set a password.#2021-01-1117:45PBThank you @U09R86PA4#2021-01-1121:57JAtkinsIs there a way to retrieve a datomic ion build id (git sha) from the running env?#2021-01-1305:05tatutI did the same, but then you can’t use :rev but have to have :uname instead#2021-01-1305:05tatutas the git tree isn’t clean#2021-01-1315:46JAtkinsWouldn't it be? Ions can only push from clean git trees*, and if you ignore the build repos nothing should change anyway. I used git rev-parse HEAD#2021-01-1219:46jaretHi all! We're looking to get an informal assessment of how Datomic Cloud customers access cloud. If you wouldn't mind taking some time to fill out he poll on our forums here it would be much appreciated:#2021-01-1219:46jarethttps://forum.datomic.com/t/how-do-you-access-datomic-cloud/1739#2021-01-1219:47jaretFeel free to add any thoughts/wishes/context/desires in the thread under the poll.#2021-01-1221:08john-shafferI just started an Ion, but I guess Datomic runs on JVM 8 and I hadn't considered that. I'm accustomed to using the newer date/time classes on JVM 11.
What do you typically use for dates & times on older JVMs? Is there a popular library?#2021-01-1221:14favilaJava 8 has the java.time classes#2021-01-1221:21john-shafferThanks. I guess it's just LocalDate/ofInstant that's missing, and I can adjust to that#2021-01-1221:24john-shafferI should have read the exception more carefully 😶#2021-01-1221:27clyfehttps://www.joda.org/joda-time/, clojure https://github.com/clj-time/clj-time#2021-01-1222:11ghadi@UCCHXTXV4 joda is deprecated
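Since LocalDate/ofInstant only arrived in Java 9, a Java-8-safe version of that conversion goes through ZonedDateTime — a sketch:

```clojure
(import '(java.time Instant ZoneId))

;; Works on Java 8 and later: Instant -> ZonedDateTime -> LocalDate.
(defn instant->local-date [^Instant inst zone-id]
  (.toLocalDate (.atZone inst (ZoneId/of zone-id))))

(str (instant->local-date (Instant/parse "2021-01-12T23:30:00Z") "UTC"))
;; => "2021-01-12"
```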
#2021-01-1311:34souenzzo@UQ5EVP2LW
https://gist.github.com/souenzzo/b18bbe3aabc9bc04720670b5c0668cc0
#2021-01-1315:03stuarthalloway@jshaffer2112 et al, happy to get opinions on moving Cloud to JVM 11, been wanting to do that for a while. Anybody see downsides?
#2021-01-1315:55kennyThe only change we had to make when moving from 8 -> 11 was adding --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED to our jvm opts for Neanderthal. Everything else has been exactly the same. It’s been nice to have access to the built in http client and some of the newer date methods, like the one John referred to, without needing to wrap them. #2021-01-1320:04john-shafferCan it start as a stack parameter for the JVM version? That would give people plenty of time to work out issues like kenny's#2021-01-1421:23steveb8nI'm with John on this. If it was a reversible flag in case some lib doesn't work, that would add a lot of confidence when trying this migration
#2021-01-1516:11hadilsJust chiming in here -- I am in favor of moving Cloud for JVM 11. We are still in early stages of our product and the conversion won't be that painful.#2021-01-1412:43stuartrexkingDoes Datomic Cloud support attribute type :db.type/bytes?#2021-01-1412:44stuartrexkingI don’t see it in the valueTypes https://docs.datomic.com/cloud/schema/schema-reference.html#db-valuetype#2021-01-1412:49jaretUnfortunately, db.type/bytes is not supported in cloud or analytics. In supporting this value type in on-prem we saw a number of problems due to the java semantics which we discuss here: https://docs.datomic.com/on-prem/schema.html#bytes-limitations#2021-01-1412:51stuartrexkingAlright thanks.#2021-01-1412:52jaretIf this is a feature you need I'd be happy to share the use case with the team if you want to provide details. If we can't provide that type perhaps we can provide another solution that meets your needs.#2021-01-1412:54stuartrexkingI’m using a java lib for managing sessions and I’d like to store them in datomic. The sessions instances have an attribute map <object, object>. I wanted to be able to serialise the attribute map and store that in a session entity.#2021-01-1412:55stuartrexkingBasically a container of data that is semantically opaque. 😛#2021-01-1412:55stuartrexkingMight have to look at using a different storage mechanism for sessions.#2021-01-1412:56stuartrexkingUnless you have a different suggestion @U1QJACBUM#2021-01-1502:43potetmserialize to string instead?#2021-01-1502:43potetmwhat upside does bytes have over string encoding?#2021-01-1912:19stuartrexkingI considered that. What I ended up doing was using tuples for session key / value pairs. #2021-01-1420:48hkrishHello Datomic/Clojure experts,
I am trying to pull all the relevant information regarding Employees in one query. First I get a vector of all the Employee maps. Then using specter/transform or clojure.walk/postwalk, I process the vector of Employee maps and get the full maps using :db/id 's. The ref attributes are not defined as component attributes. But I need to have similar functionality. For this, I use a
(d/pull db '[*] db-id)
inside the specter transform function. (or with a postwalk function).
But my pull with the above pull statement takes nearly 10 seconds or above to fetch the whole employee maps. The questions are:
1 - Why is it taking so much time? I have maybe 200 employees at the moment. It is a SOLO stack.
2 - Is there any better/faster way to get the full maps with the :db/id's?
Thank you for any suggestions.
See the code below: I have removed irrelevant lines.
(let [ employees [
#:employee{:email "<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>",
:last-name "smith",
:emplid "PLM0015",
:job #:db{:id 101155069757724},
:full-time? true,
:first-name "Haroon",
:employee-type #:db{:id 79164837202211},
:gender-type #:db{:id 92358976735520},
}
#:employee{:email "<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>",
:last-name "smith",
:emplid "PLM0025",
:job #:db{:id 10115506975245},
:full-time? true,
:first-name "Farhan",
:employee-type #:db{:id 79164837202211},
:gender-type #:db{:id 92358976735520},
}
....................]
]
;;job :db/id is 101155069757724
;; (d/pull db '[*] 101155069757724) )
(specter/transform [ALL]
(fn [each-map]
(let [db-id (:db/id each-map)]
(d/pull db '[*] db-id) ))
employees)
;;I apply the above logic only for the map values with :db/id's.
)#2021-01-1421:19favilaIf this is datomic cloud, this is slow because it is 600 blocking request+responses in a row#2021-01-1421:20favilaThis looks like employees already came out of a pull. why not just pull everything in one go?#2021-01-1421:21favila[* {:employee/job [*] :employee/employee-type [*] :employee/gender-type [*]}]#2021-01-1501:14hkrishThank you for the response. It is Datomic Cloud/Solo. I was doing it this way basically to make it dynamic. It could be Employee or it could be another Entity in the domain, like Benefit or Product etc. One function returns the complete information.#2021-01-1421:58kschltzI'm facing a few recurring issues with datomic cloud write latencies and index memory usage.
In our current setup we are transacting one event at a time, throttling them to avoid overloading our transactor.
I was wondering if we would benefit from grouping our events before transacting, or that is not necessarily the case?#2021-01-1423:36favilaRule of thumb is to aim for transaction sizes of 1000-2000 datoms if you actually can control how changes are grouped#2021-01-1423:40favilawhen you say “issues”, what problem are you facing?#2021-01-1423:40kschltztransactions failing from time to time due to "busy indexing"#2021-01-1423:58favilacould your transactor just be undersized for your rate of novelty? is this a regular thing or something you only encounter during bulk operations?#2021-01-1423:59kschltzWe are running the biggest machines available#2021-01-1423:59kschltzand ensuring a delay of 50ms between each transact call#2021-01-1500:00kschltzThere's an interval of around a month or two between issues arise#2021-01-1500:01favilawhat is the rate of datom accumulation?#2021-01-1500:02favilaI’m not as familiar with cloud, but on-prem has a “Datoms” and “IndexDatoms” metric which counts the number of datoms accumulated in the system#2021-01-1500:03kschltz10BI#2021-01-1500:03favila10 billion?#2021-01-1500:04kschltzyes#2021-01-1500:04favilaok, that is a very large database#2021-01-1500:04kschltzaround two years of data#2021-01-1500:05favilayou maybe should talk to cognitect support about this#2021-01-1500:05kschltzWe are in touch in parallel#2021-01-1718:44ChristosHello guys,
How can I cancel a datomic transaction from within a transaction function in dev-local?
Many thanks!#2021-01-1719:07kennyhttps://docs.datomic.com/cloud/transactions/transaction-processing.html#cancel
#2021-01-1719:13favilaYou can also just throw
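A sketch of the throwing approach favila mentions, inside a hypothetical transaction function (the cancel docs linked above describe datomic.ion/cancel as the other option):

```clojure
;; Hypothetical tx fn: throwing any exception aborts the whole transaction;
;; ex-info lets the caller inspect the attached data.
(defn withdraw [db account-id amount]
  (let [balance (:account/balance (d/pull db [:account/balance] account-id) 0)]
    (when (< balance amount)
      (throw (ex-info "Insufficient balance"
                      {:account account-id :balance balance :requested amount})))
    [[:db/add account-id :account/balance (- balance amount)]]))
```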
#2021-01-1719:21ChristosMany thanks for the quick response guys :-)#2021-01-1907:47Matheus MoreiraHello, datomickers! Newbie question: is it good practice to use Datomic entity ids as identifiers shared with the application in a similar way that we use surrogate primary keys on relational databases?#2021-01-1908:47tatutYou probably should create application level ids if you are sharing them with any external place#2021-01-1908:47tatutLike other systems or even having them in URL routes or similar#2021-01-1912:08souenzzoYou don't control :db/id. In some cases, like migrations/restores, it can change, so you should not use it in things like URLs
#2021-01-1912:16Matheus MoreiraI see. But Datomic doesn’t have anything like a long sequence generator, right?#2021-01-1912:25tatutrandom UUIDs are usually good
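A sketch of tatut's suggestion — an application-level UUID identity attribute (attribute names are illustrative):

```clojure
;; A public id that survives backup/restore, unlike :db/id.
(def id-schema
  [{:db/ident       :order/public-id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

;; With the peer API, d/squuid gives time-prefixed UUIDs that index well;
;; (java.util.UUID/randomUUID) works everywhere.
(d/transact conn [{:order/public-id (d/squuid)}])
```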
#2021-01-1912:29tatutif you really need a sequence, you could create a tx fn to do it and use an attribute with nohistory#2021-01-1912:29tatutbut usually opaque identifiers are best#2021-01-1913:48Matheus MoreiraYes, I think that an UUID is enough, really. My context here is that I am trying to model using Datomic the (relational) db model of the app that I maintain, and some of its tables has this weird combination of auto-increment long key plus an UUID. Thinking about it now, the pair doesn’t make much sense, the UUID would be enough. I think that the long key exists because of some legacy stuff.#2021-01-1913:51favilaIf you end up needing both, you can use a unique tuple #2021-01-1916:49Matheus MoreiraI just started maintaining this system, I think that both keys are unnecessary… I would drop the long key and keep only the UUID if I started it in Clojure/Datomic. 🙂#2021-01-1911:50thumbnailI have a question about tuples; Say that I have a category with tags, and these tags are component/many of this category. Now, I’d like to add a composite (tuple) key to this tag entity that says `[tagName, category]` is unique, but there is no explicit relation from tag -> category. Do I have to reverse this relation / lose the component-ness to add this composite key?#2021-01-1913:52favilaYes, or you can update this value yourself as a denormalization#2021-01-1913:52favilaIe make it a heterogeneous tuple value instead of a composite one#2021-01-1914:37thumbnailI'll try that! The relation never changes in my case, so duping the relation the other way around is not too bad.
Was hoping for something automatic, so it wouldn't get out of sync. but oh well#2021-01-1922:37rcruppIf I'm deploying an api service using datomic.lambda.api-gateway/ionize can I start a long running listener (like a kafka consumer) in the handler I pass in?#2021-01-2019:41TuomasI haven't personally done so, but I'm pretty sure you can
https://forum.datomic.com/t/kafka-consumer-as-an-ion/823#2021-01-2020:13rcruppExcellent! Many thanks#2021-01-2017:23ChristosHello, I have created a function which makes a query to datomic dev-local.
When I call it as a normal function the query returns data.
When I call it as a transaction function on the same db the query returns nothing.
Any ideas?! Many thanks#2021-01-2018:38favilaConfirm, your fn signature looks like (defn fn-name [db arg1 arg2 ,,,]) , you call it “normally” like (fn-name db arg1 arg2 ,,,) and you “call” it in a tx like [,,, ['namespace/fn-name arg1 arg2] ,,,] (note fully qualified, implicit db arg omitted)#2021-01-2019:41ChristosExactly Francis, that is what I do and with exactly the same arguments.#2021-01-2019:43favilaHow do you know the results are different?#2021-01-2019:46ChristosI throw an exception in the transaction and I include in the message the result of the query. In the case of the transaction it is an empty vector.#2021-01-2019:47ChristosIt does not find what the non-transaction related function call finds.#2021-01-2019:47favilaAnd you are sure they are the same db?#2021-01-2019:49ChristosIn the non-trans version I use (d/db conn) using the same conn as the one in the trans version. So I guess they are the same.#2021-01-2019:50ChristosAnd I don't change anything in between#2021-01-2017:45jaretCognitect dev-tools version 0.9.58 now available. https://forum.datomic.com/t/cognitect-dev-tools-version-0-9-58-now-available/1751
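A sketch of the calling convention favila spells out (namespace and fn names are hypothetical):

```clojure
(ns myapp.txfns)

;; A classpath transaction function: the first arg is the db value the
;; transaction runs against, the remaining args come from the tx data.
(defn set-name [db eid new-name]
  [[:db/add eid :person/name new-name]])

;; Called "normally" for testing:
;;   (myapp.txfns/set-name (d/db conn) 123 "Ada")
;; Invoked in a transaction (db arg implicit, symbol fully qualified):
;;   (d/transact conn {:tx-data [(list 'myapp.txfns/set-name 123 "Ada")]})
```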
#2021-01-2019:19jarethttps://forum.datomic.com/t/new-client-release-for-pro-0-9-66-and-cloud-0-8-105/1752#2021-01-2106:32David PhamDoes anyone have an example of datomic running on MS SQL/SQL Server? :) I am interested in the config files for the tables.#2021-01-2113:03raymcdermottdoes the datomic team plan to check for CVEs prior to release? We are finding some from the most recent release and it would be good if they were fixed at source.#2021-01-2113:48jaretThis is on our radar we have an internal list that includes:
CVE-2018-10237, CVE-2020-8908, CVE-2015-6420, CVE-2017-15708, CVE-2019-10086
Initial review has found it to not be a straightforward bumping of a dep. If you have another CVE that we need to look at, please share the ID here, on the forums (http://forum.datomic.com), the knowledgebase at http://ask.datomic.com or e-mail us directly <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2021-01-2113:49jaretslack is the devil for tracking these things because of archiving.#2021-01-2117:51raymcdermottthese version bump works in our tests though I would like to understand failure cases obviously. I’ll follow up on the official site.#2021-01-2113:04raymcdermottcom.datomic/datomic-pro {:mvn/version "1.0.6222" :exclusions [commons-beanutils/commons-beanutils
org.apache.httpcomponents/httpclient]}
;; To remove CVE warnings from Datomic deps
commons-beanutils/commons-beanutils {:mvn/version "1.9.4"}
org.apache.httpcomponents/httpclient {:mvn/version "4.5.13"}#2021-01-2113:05raymcdermottI know we are all committed to hating semantic versioning but it's working for us here#2021-01-2114:17rolandhello, I see that now index-pull has a :reverse option. Is there any plan to add it to the index-range function?#2021-01-2115:06jaretHi @UBV3GR85Q it hasn't come up but I think this would make a good feature request. We are trying to track feature requests by polling http://ask.datomic.com. If you could log this as a feature request there we could see how much of the community would like this feature and it will work into our internal tickets for consideration.#2021-01-2511:41rolandThanks, I asked there: https://ask.datomic.com/index.php/582/reverse-order-browsing-for-raw-indexes
#2021-01-2122:31shieldsHello all, I was wondering if anybody has experience with Datomic Ions and using Auth0 through the API-Gateway.
Any suggestions on how to create the authorizer to hook to my Lambda Function? I basically want to be able to add the Auth token to the Header.#2021-01-2123:51Joe LaneAre you using an Ion Lambda as the custom authorizer or are you using an Ion Lambda as the http-handler? I don't see how this has anything to do with ions. Isn't this just a matter of wiring up Auth0 as a https://auth0.com/docs/integrations/amazon-cognito? Once your request makes it past Cognito, isn't it smooth sailing?#2021-01-2200:07shieldsI'm using a universal login, nothing to do with Cognito. Context is that I'm doing a refactor and the current stack had Auth0 default authorizer integration but it's only available using HTTP, not REST.
But I think I found a solution here in the forums. Just need to pass the token through the header from the request and then have a middleware verify on the backend.
https://forum.datomic.com/t/where-can-i-find-cognito-or-iam-details-from-api-gateway-when-using-http-direct/1675/7#2021-01-2205:40pyryHello! We're running Datomic Pro 1.0.6165 on-prem and are a bit puzzled as to why certain queries we think ought to be fast, well, aren't.#2021-01-2205:40pyryAs part of a sysadmin's view to our system, we provide a page listing counts of entities per domain type stored in our datomic instance. There are roughly 10 million such entities at the moment, spread quite unevenly across 80+ domain types.#2021-01-2205:40pyryRendering this page is at the moment slow, as getting the counts from datomic typically takes tens of seconds.#2021-01-2205:41pyryWhat we're doing at the moment to get the counts is roughly the following:
(->> (d/datoms db :aevt ::object/type)
(map :v)
(frequencies))#2021-01-2205:43pyryAs mentioned, this doesn't perform too well on our data. A natural alternative would of course be to query the counts instead, which I think was what we did earlier. If memory serves me correctly, this however didn't perform too well either..#2021-01-2205:43pyryNonetheless, I think it makes sense to again try
writing the query as something like so:
(d/q '[:find ?t (count ?e)
:in $
:where [?e ::object/type ?t]]
db)#2021-01-2205:45pyryQuestion: Should datomic be able to get the (count ?e) above efficiently from eg. some index metadata (if that's a thing) or will it have to essentially traverse the entire index to calculate the counts?#2021-01-2205:46pyryAdditionally, I'm wondering if I should expect a call to qseq instead of q to perform better with the above query?#2021-01-2210:33favilaI expect your datoms version to be the fastest. The query versions will be retaining all records in memory at once. There’s no way in datomic to avoid visiting every item in that index. Datomic doesn’t have index metadata of eg cardinality info or set members#2021-01-2210:36favilaYou need to cache more (larger object cache, valcache or memcached secondary cache) or a faster storage; the first query issuance fill be slow but subsequent ones will be faster (assuming the same peer performs the query each time). If that’s still not good enough, consider keeping the counts precomputed. You can perform the query then have something listen to txs using tx-report-queue or tx-range polling to keep the count up to date#2021-01-2214:41pyryAll right, thanks for this.#2021-01-2223:10bhurlowAre :limit and :offset supported in all Datomic versions with index-pull? or were those params added in more recent versions? The index-pull doc string doesn't show those options, but this page demonstrates their use https://docs.datomic.com/on-prem/index-pull.html#2021-01-2223:10bhurlowI'm using on-prem, not client#2021-01-2319:14vlad_pohIs there a book on datomic? Does datomic do those academic datalog questions (ethel is fred's brother mark is fred's dad who is ethel's mum?) type stuff#2021-01-2513:34val_waeselynckhttp://learndatalogtoday.org#2021-01-2701:40vlad_pohVery nice! 
Just went through it; wish there was more.#2021-01-2713:32val_waeselynckYMMV, but in my experience once Datalog “clicks” there's not much to it.#2021-01-2423:01niveauverleihThere's a minor bug on the datomic tutorial https://docs.datomic.com/on-prem/tutorial.html
The code `(def types [:shirt :pants :dress :hat])
(def colors [:red :green :blue :yellow])
(d/transact conn {:tx-data (make-idents sizes)})
(d/transact conn {:tx-data (make-idents types)})
⇧` should be
`(def types [:shirt :pants :dress :hat])
(def colors [:red :green :blue :yellow])
(d/transact conn {:tx-data (make-idents types)})
(d/transact conn {:tx-data (make-idents colors)})
⇧`
where do I report that?
Also, the plural of schema is schemas or schemata, but not schema: "In Datomic, schema are entities ..."#2021-01-2423:17Joe LaneRight here is fine, thanks!#2021-01-2510:47niveauverleihHi Joe, here's one more: "So far we have created an accumulated data. " The singular of data is datum. But that sentence isn't clear to me.#2021-01-2510:52niveauverleihMaybe "So far all we have done is accumulating data."#2021-01-2514:49Lennart BuitIf I have an entity with a composite tuple containing a ref to some ident (‘enum style’), say, I have a person entity for which name + gender are unique. Is it intended that I can’t pull entities by [name, gender] when gender is still an ident? (ex in thread)#2021-01-2514:49Lennart Buit;; Pull with ident
(d/pull (d/db conn) '[*] [:person/name+gender ["Lennart" :gender/male]])
=> #:db{:id nil}
;; Resolve ident first
(:db/id (d/pull (d/db conn) '[*] :gender/male))
=> 17592186045417
;; Pull with eid of ident
(d/pull (d/db conn) '[*] [:person/name+gender ["Lennart" 17592186045417]])
=>
{:db/id 17592186045421,
:person/name "Lennart",
:person/gender #:db{:id 17592186045417},
:person/name+gender ["Lennart" 17592186045417]}#2021-01-2514:53Lennart Buit(Also, majorly bad example, don’t assume name + gender are unique, but you know, example :’) )#2021-01-2517:22favilaYes, datomic does not attempt to resolve anything in the “value” slot of a lookup ref. You must provide exactly the value that would be found in the datom’s :v, which for refs is the entity id
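A helper capturing favila's point, using the names from the example above — resolve the ident to its entity id before building the lookup ref (d/entid is peer API; with the client API, pull :db/id as shown above):

```clojure
;; The value slot of the composite-tuple lookup ref must hold the raw
;; entity id, so resolve the ident first.
(defn pull-by-name+gender [db person-name gender-ident]
  (d/pull db '[*] [:person/name+gender [person-name (d/entid db gender-ident)]]))

;; (pull-by-name+gender (d/db conn) "Lennart" :gender/male)
```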
#2021-01-2517:23favilaAt least, that’s the way it behaves now. It seems conceivable that it could act differently.#2021-01-2520:55NassinRigidity arises in systems to the extent that the schema pervades the storage representation or application access patterns, making changes to your tables or documents difficult.#2021-01-2520:55NassinIs this referring to being able to do row-shaped, column-shaped, graph-like, and document-like data modeling more freely, with fewer constraints?#2021-01-2521:04NassinIt was taken from here, BTW: https://ask.datomic.com/index.php/225/does-datomic-support-schema-less-data#2021-01-2610:29onetomI just saw https://docs.datomic.com/cloud/releases.html#704-8957
> Upgrade: The version of the Presto server running on the access gateway is now 338. This includes an upgrade to Java 11 on the access gateway.
It makes me wonder which Java version runs on the query groups?
Where is that documented?
(a quick googling for datomic cloud query group java version didn't answer it)#2021-01-2610:30onetommy colleague was just trying to run https://github.com/gnarroway/hato within an ion (just to send a slack message) and he got a
Syntax error compiling at (hato/client.clj:1:1) error, which makes me suspect that query groups are still running on Java 8.#2021-01-2610:32onetomwe saw that the https://github.com/Datomic/ion-event-example/blob/master/src/datomic/ion/event_example.clj#L88-L118 is using cognitect.http, which is still not open source
(at least I can't find it under https://github.com/cognitect?q=http&type=&language=)#2021-01-2613:12Alex Miller (Clojure team)I believe the source is in the Maven artifact for that one#2021-01-2611:40onetomhttps://docs.datomic.com/cloud/client/client-api.html#client
This documentation page says :server-type can be :ion OR :cloud
It also says
> See the ion server-type documentation for more details on the server-type.
which links to https://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion
but that page doesn't mention :cloud.
The client API reference only mentions :cloud (besides :dev-local and :peer-server)
https://docs.datomic.com/client-api/datomic.client.api.html#var-client
Is there any difference between :ion and :cloud then?
Which one is preferred?#2021-01-2614:13Robert A. Randolphhttps://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion documents :ion as you mentioned.
:cloud is for the scenario when you are in a cloud system and wish to connect to another system.
We'll work on improving this documentation, thank you.
#2021-01-2617:41onetomit hadn't even occurred to me that there's support for connecting from one Datomic system to another.
I'm just starting to use multiple DBs on one system, from one ion process. :)#2021-01-2611:41onetomrelated question:
the ions-reference.html also says:
> :endpoint "http://entry.<system-or-query-group-name>.<region>.datomic.net:8182/" #2021-01-2611:42onetombut that is the only mention of the word endpoint on the whole page.
it doesn't seem to explain when I need to use the system-name and when the query-group-name#2021-01-2623:42stuartrexkingI’m working through the Ion tutorial. When I get to the https://docs.datomic.com/cloud/ions/ions-tutorial.html#setup-db-and-load-dataset step I get an exception https://gist.github.com/stuartrexking/2edb397b72f0ced98e03069656bc3623#2021-01-2623:43stuartrexking#2021-01-2701:20stuartrexkingI had incorrect config for the region name. 🤐#2021-01-2717:38jarethttps://forum.datomic.com/t/datomic-1-0-6242-now-available/1757#2021-01-2717:39jaretCCing @favila as we made a change to attribute predicates per the support case you logged awhile back ^#2021-01-2717:40favilaWow that is an incredible relief. I was pretty sure when that ticket closed that you would go the other way and assert on all retractions, including the ones you missed#2021-01-2717:41favilaMany thanks, this makes our lives much easier#2021-01-2717:42jaretThere was a lot of discussion around this. Please let us know your thoughts after you have used it in anger.#2021-01-2717:42favilaWill do#2021-01-2717:43favilaI will tell you that having had to hammer existing values to fit into a reasonable predicate has not been fun#2021-01-2717:44jaretSorry about that. Hopefully this works much better.
#2021-01-2801:29JohnJattribute predicates were applied on retractions too? or what was the issue here?#2021-01-2815:03favilaYes, they were applied on retractions too, mostly#2021-01-2815:55JohnJwhat problems did it cause?#2021-01-2815:59favilaIf you have existing data that violates a predicate, you can’t add the predicate you want and then get rid of bad data. You have to either compromise the predicate to fit the bad data you have (possibly allowing new bad data); or you have to get rid of the bad data first, install the predicate, then check your data again (retractions of bad data may fail in the meantime), and possibly remove the predicate and repeat.#2021-01-2816:13JohnJgot it, thx#2021-01-2816:28JohnJI wonder why they didn't take the approach of "all data must satisfy the predicate before the attribute predicate can be added".#2021-01-2816:30favilaThat would be a (potentially long) blocking transaction#2021-01-2817:43JohnJIf you don't mind, besides HA, does Clubhouse run more than one transactor to serve customers?#2021-01-2817:46favilaonly one, for now. We’re working on sharding.
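For context, installing an attribute predicate looks roughly like the sketch below (client-API style; the attribute, namespace, and predicate fn are all hypothetical). Per the discussion above, as of 1.0.6242 the predicate is checked on assertions but no longer on retractions:

```clojure
;; Hypothetical predicate fn; must be a named, pure fn of one argument
;; available on the classpath of the transacting process.
(ns my.app.preds)

(defn valid-email? [s]
  (boolean (re-matches #".+@.+\..+" s)))

;; Install it on a (hypothetical) attribute via :db.attr/preds.
(d/transact conn
  {:tx-data [{:db/ident      :user/email
              :db.attr/preds 'my.app.preds/valid-email?}]})
```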
#2021-01-2717:40jaretThanks for pointing that issue out to us!#2021-01-2913:26tvaughanDoes/should https://dev-tools.cognitect.com/maven/releases/com/datomic/dev-local/maven-metadata.xml exist? I'm following up on https://github.com/liquidz/antq/issues/54 Thanks#2021-01-2913:31Alex Miller (Clojure team)I answered on the issue - this is typical for s3 repos#2021-01-2913:32tvaughanThank you#2021-01-2913:47staskHi, anyone has an idea how to implement DRP for datomic-cloud based system?
We're considering datomic-cloud and one of the requirements is to be able to move from one region to another in 12 hours in case something happens to a region.
Thanks#2021-01-3103:11joe smithhi, anybody have experience using Datomic Ions? does it support canary deployment?#2021-01-3117:06Joe LaneHey Joe, what is your specific use-case for canary deployment? Also, could you elaborate on what you mean by “plans for serverless”?
#2021-01-3121:10joe smithI finished watching the video about Datomic Ions and it has answered my question.#2021-01-3121:10joe smithBasically by canary deployment, is it possible to do a rolling update in production? if something goes wrong then immediately revert to previously working production.#2021-01-3121:11joe smithby Serverless I believe thats what the Lambda proxies solve. All in all Datomic Ions seems to answer my needs.#2021-01-3121:11joe smiththank you for your answer Joe!#2021-02-0100:59Joe LaneRE: canary deploy, It depends again on what you mean by “something goes wrong”.
Right now, if you initiate a deployment and the ec2 instance fails to complete its code deploy steps, then the deployment is marked as failed and (in a production topology) there is zero downtime.
Now, if the code deploy succeeds on a given instance and your code starts up but the latest release of your code is “too slow” as measured by a cloudwatch metric you have set up, datomic ions can’t fail the build on your behalf because it appears to have succeeded as far as Code deploy is concerned.
Hope that clarifies things and I’m glad I could help.#2021-02-0405:41joe smithokay thank you Joe!#2021-01-3103:12joe smithalso any plans for serverless?#2021-02-0112:22ivanaHello! I have a :region/city attribute of the :region entity with cardinality many and type string in my schema. Is there a way to change it to ref type while keeping its id? I tried to change the type directly, and to add an alias id->ident, both with and without clearing existing values - Datomic raises an error every time.#2021-02-0113:10favilaI think you’re asking if you can change :region/city to be ref instead of string typed? You cannot change the type of attributes. You can make a new attribute of the same name, but the old assertions will remain#2021-02-0113:12favilaI suggest for your sanity making a new attribute with a different name and working out your migration of old data to new. Only then consider changing names again; but with a large codebase it’s often just not worth it.#2021-02-0113:15ivanaI hid the details to make a long story short - I created a new attribute with a new name & type, migrated all the data, and cleared the values of the old attribute. But finally I need that old attribute name with the new values.#2021-02-0113:16ivanaNow I have an empty old attribute and a filled new one, but with a different name#2021-02-0113:17favilarename the string attribute to something else; then rename the ref attribute to :region/city#2021-02-0113:19favilaand make sure your code is ready for that, because ref and string are not compatible types#2021-02-0113:20ivanaWhat does "rename attribute" mean?
[[{:db/id :region/cities
:db/ident :region/cities-old}
{:db/id :region/cities-ref
:db/ident :region/cities}]]#2021-02-0113:21ivana?#2021-02-0113:21favilayes#2021-02-0113:22ivanaok, thanks, I'll try it. I thought I needed to make retractions from the schema#2021-02-0113:23favilayou cannot retract attribute schema#2021-02-0113:23favilayou can only change idents around#2021-02-0113:55ivana@U09R86PA4 thanks a lot! Everything works fine now, except one little thing: I still have that old region attribute in the schema, without values )#2021-02-0113:56favilawell it does have values…in the past#2021-02-0113:56ivanaAh, in the history#2021-02-0223:50stuartrexkingAre there any side-effects that I should consider when transacting schema every time my application restarts? I know that if the schema hasn’t changed there will simply be an empty transaction, as per https://docs.datomic.com/cloud/transactions/transaction-processing.html#redundancy-elimination. Is there any benefit to using something like https://github.com/avescodes/conformity? Is there an idiomatic or widely used approach here? I am deploying an ions application. The ions-starter example app exposes a lambda to transact schema but that leaves open the risk of not running it when the schema changes.
1. Push and deploy
2. Trigger a schema transaction (always or only when the schema has changed?)#2021-02-0301:37stuartrexkingReally what I want to know is why “I just mean: do it in a separate moment from app bootup.”#2021-02-0301:41ghadi(The accretion principle still applies)
If you have N nodes booting up, you have N racey processes trying to do the same work.#2021-02-0301:43ghadiWhat I’ve seen work well is:
Run tests, get a green check
Transact new schema to DB
Then deploy
The app can presume that its schema exists in the DB - or exit if it doesn’t #2021-02-0301:47ghadiThe Ion sample uses an endpoint to trigger schema installation, but in a real app you’d likely have code that expects the schema to be there, along with the schema installation code, so it’s a catch 22{:tag :div, :attrs {:class "message-reaction", :title "thinking_face"}, :content ({:tag :span, :attrs {:class "emoji"}, :content nil} " 3")}
#2021-02-0302:01stuartrexkingThanks, seems like a good approach. I’ll do that.#2021-02-0311:42kschltzWell, I transact the schema on bootup; as there are only 4-6 nodes of that particular service, no issues so far#2021-02-0312:11pinkfrogWanna play with the Datomic Pro Starter Edition. Does the “Running more than 3 processes (2 peers + transactor)” restriction still apply?
#2021-02-0402:07Jake ShelbyI'm seeing some behavior in datomic cloud (both remote connection, and when deployed in ions), that is contrary to what I've been led to believe as "the database as a value". When I acquire a "current" DB value from a connection, their hash is new every time I ask for it, even though the DB values have the same t basis. When I run the same test with dev-local, I do see the desired behavior, that in fact, 2 DBs of the same basis are equal. Does anybody know why the client API behaves this way, and why it's different from dev-local?
;; dev-local connection, using dl/divert-system
(let [c (conn)
db1 (d/db c)
db2 (d/db c)]
{:db1 db1
:db2 db2
:db1-hash (hash db1)
:db2-hash (hash db2)
:eq? (= db1 db2)
:type (type db1)})
;; => {:db1
;; #datomic.core.db.Db{:id "import-202123-1720", :basisT 3151, :indexBasisT -1, :index-root-id nil, :asOfT nil, :sinceT nil, :raw nil},
;; :db2
;; #datomic.core.db.Db{:id "import-202123-1720", :basisT 3151, :indexBasisT -1, :index-root-id nil, :asOfT nil, :sinceT nil, :raw nil},
;; :db1-hash -1795899545, ;; These stay the same as long as nothing more is transacted
;; :db2-hash -1795899545, ;; These stay the same as long as nothing more is transacted
;; :eq? true, ;; This was intended
;; :type datomic.core.db.Db}
;; Remote connection, using :server-type :ion
(let [c (conn "core-prod" "XXXX.core.prod")
db1 (d/db c)
db2 (d/db c)]
{:db1 db1
:db2 db2
:db1-hash (hash db1)
:db2-hash (hash db2)
:eq? (= db1 db2)
:type (type db1)})
;; => {:db1
;; {:t 3150, :next-t 3151, :db-name "XXXX.core.prod", :database-id "de0a365c-eb28-4cf4-a490-bd0bcfff8104", :type :datomic.client/db},
;; :db2
;; {:t 3150, :next-t 3151, :db-name "XXXX.core.prod", :database-id "de0a365c-eb28-4cf4-a490-bd0bcfff8104", :type :datomic.client/db},
;; :db1-hash 256930107, ** New hash values every time I acquire **
;; :db2-hash 8175244, ** New hash values every time I acquire **
;; :eq? false, ;; ** This is not intended **
;; :type datomic.client.impl.shared.Db}#2021-02-0405:59tatutIt is a value in the sense that passing the same db to a query will give the same results… but not in more strict senses#2021-02-0406:00tatutI wouldn’t rely on the identity or hash of the db handle#2021-02-0415:55Jake ShelbyThis was one of the biggest benefits that I saw early on: https://youtu.be/4iaIwiemqfo?t=3755#2021-02-0405:41joe smithWhat is a Query group? If I'm in production, do I also need to launch a Query group server as well? For development, is the Solo instance missing any features from the production Datomic?#2021-02-0410:05danierouxThe solo instance misses https://docs.datomic.com/cloud/ions/ions-tutorial.html#http-direct and for development, I would start with https://docs.datomic.com/cloud/dev-local.html#2021-02-0410:06danierouxA query group is a separate cluster that reads and caches the data, so it doesn’t affect your other clusters. It becomes useful when you have different workloads on the system#2021-02-0406:56joe smithis it possible to build a REST api around Datomic so that it can be called by other API end points sort of like microservice architectures? I want to have a serverless (AWS Lambda) web facing API for handling application logic and then use datomic to keep track of financial transactions (in particular double entry ledger). I want to be able to do /datomic/ledger/user234/credit/1000 and it should make that changes on the datomic. I guess this is where the lambda proxies come in that can expose datomic functions on AWS API Gateway/Lambda ?
With the above scenario, do I lose any performance benefits for reading, (I have a vague understanding of Peer and caching) by calling everything over the wire (rest api)? If so, how should I be architecting here?
Lastly, what AWS Database do you recommend for storage? If I use RDS MySQL instance, will I be able to do SQL queries as well? Or is it completely opaque regardless of what underlying DB you use?
Thanks again!#2021-02-0407:58tatutYou certainly can make a REST API around your Datomic stuff#2021-02-0407:58tatutDatomic cloud doesn’t allow you to specify what storage to use afaict#2021-02-0410:02danierouxIt is completely opaque, you cannot specify storage but: https://docs.datomic.com/cloud/analytics/analytics-concepts.html will give you SQL view into the database
https://www.youtube.com/watch?v=thpzXjmYyGk is a deep dive on Datomic Ions, worth watching#2021-02-0501:51joe smiththanks guys!#2021-02-0419:52eraadHi! I would appreciate pointers on how to avoid an IAM user from deleting a database#2021-02-0420:13SchmohoI am super confused by the Datomic product range ...
I just want to develop a personal project that needs to run only locally on my machine using Datomic. Now I understand I should either use Datomic Free or Datomic Starter, but now I've also come across dev-local which apparently provides the Client API - does that then not entail any storage? What exactly is dev-local?#2021-02-0420:17kennyHave you seen https://docs.datomic.com/cloud/dev-local.html?#2021-02-0420:32SchmohoAh yeah reading this a little more thoroughly would have answered the storage question ... the "why the diversity" question not really though. 😉
thanks#2021-02-0420:35kennyThere's two different products -- Cloud and On-Prem. Datomic free is mostly deprecated, I think.#2021-02-0420:17Alex Miller (Clojure team)dev-local has storage and is probably a great match for this if it's a personal project
#2021-02-0420:29SchmohoOk thanks ... why are there three "free to use" versions of Datomic? Is it all about licensing?#2021-02-0420:32Alex Miller (Clojure team)I would defer to someone on the Datomic team to answer that more fully so that I don't say something dumb :)#2021-02-0501:55joe smithshould I be using Solo or Datomic Free locally to learn? It's been almost 5 years since I last built a rudimentary double entry ledger on Datomic. Right now I am learning the basics of Clojure, Datalog and finally getting my hands dirty with Datomic again. I guess if I am trying to build an MVP I would be better off with Solo which I believe is like $5 CAD / month?#2021-02-0501:58kennyIf you just want to mess around locally, check out dev-local. #2021-02-0502:20stuartrexkingNo matter what I do, I can’t get ions.cast/event to output to :stdout. I’ve pared down to a simple Clojure repl, where the first line is to require and call
(cast/initialize-redirect :stdout)
but no luck. Any ideas on how to solve this or where to look?#2021-02-0502:21stuartrexking(.println System/out "10")
outputs to stdout, so what gives?#2021-02-0502:31Joe LaneAfter you redirect to :stdout then your calls to cast/event and cast/dev are redirected. #2021-02-0502:39Jake ShelbyI'm still having this problem as well, I reported it a while ago ....#2021-02-0502:41Jake ShelbyI feel like it worked a long time ago, in my project, but stopped working at some point - make's me think that some dep I added made it stop working, but I haven't tested that theory yet#2021-02-0503:01stuartrexkingSo interesting observation. If I call event immediately after initializing, then it works fine.
(cast/initialize-redirect :stdout)
(cast/event {:msg "Cast Initialized!"})
If I require other dependent namespaces and initialize other parts of my system, then call event, it doesn’t work.#2021-02-0503:01stuartrexkingNot sure what’s going on inside /event#2021-02-0503:02stuartrexkingI suspect some other lib or one of the deps is messing with
*out*
#2021-02-0503:02stuartrexkingHard to tell without looking at /event#2021-02-0507:19danierouxWhat I saw was that if cast/event happens before cast/initialize-redirect - the redirect never happens.#2021-02-0507:20danierouxSo I had to ensure that cast/initialize-redirect happens first, on REPL startup even.#2021-02-0605:30aaroncodingHey if I'm using the client api in my pedestal app, how expensive is it to run d/connect and d/db?
In other words, should I try to ONLY get a conn and db once at startup? Or is it acceptable to do it more ad hoc?#2021-02-0610:04eraadThe pattern I have seen in examples and adopted is getting a conn and a db in an interceptor and reusing them across the chain.
#2021-02-0610:05eraadThey should not be expensive#2021-02-0610:08eraadThis is a good reference:
https://github.com/cognitect-labs/vase/blob/d882bc8f28e8af2077b55c80e069aa2238f646b7/src/com/cognitect/vase/routes.clj#L37#2021-02-0612:44aaroncodingThanks!#2021-02-0616:09Joe Lane@UE1747L7J getting a db is cheap, getting a connection is a bit more expensive. You can memoize the creation of a connection and reuse it because it is thread-safe.
Like @U061BSX36 said, You probably want to get the conn and db in an interceptor. You can also look here for a vanilla pedestal + ions approach. https://github.com/pedestal/pedestal.ions
You definitely want to get a new db on every request, otherwise your application will have old data.
#2021-02-0613:29bendyI set up a datomic cloudformation template, then following the ion tutorial, tried to split the stack. I deleted the master stack, but when I went to recreate, I received the following error on step 2 (Specify Template)
The following resource types are not supported for resource import: AWS::IAM::Policy,AWS::IAM::Policy,AWS::IAM::Policy,AWS::IAM::Policy,AWS::IAM::Policy,Custom::ResourceName,AWS::EC2::DHCPOptions,Custom::ResourceCheck,Custom::ResourceQuery,Custom::ResourceQuery,Custom::ResourceQuery,Custom::ResourceQuery,Custom::Resource,AWS::EC2::VPCGatewayAttachment,AWS::EC2::VPCDHCPOptionsAssociation,AWS::EC2::VPCEndpoint,AWS::EC2::VPCEndpoint,AWS::EC2::SubnetRouteTableAssociation,AWS::EC2::SubnetRouteTableAssociation,AWS::EC2::Route,AWS::EC2::SubnetRouteTableAssociation,Custom::ResourceCheck,AWS::EFS::MountTarget,AWS::EFS::MountTarget,AWS::EFS::MountTarget,AWS::ApplicationAutoScaling::ScalableTarget,AWS::ApplicationAutoScaling::ScalableTarget,AWS::ApplicationAutoScaling::ScalingPolicy,AWS::ApplicationAutoScaling::ScalingPolicy,Custom::Resource
#2021-02-0613:29bendyDoes anyone know what could have gone wrong? I followed the docs to the best of my ability, I have not done anything custom.#2021-02-0613:31bendyInterestingly there are two create stack buttons in the cloudformation console. The one in the top right is giving me the above error. Clicking the create button in the middle of the screen allowed me to successfully create from the template#2021-02-0613:32bendy#2021-02-0613:48bendyAh upon further investigation I seem to have selected the wrong option from the top right dropdown. I thought I had tried both the options - only With new resources works#2021-02-0620:27bendyAfter a long day of configuration, I finally got datomic cloud configured and the cloud formation stack split. I wrote a small function, added it to my domaitn/ion-config.edn, pushed it successfully, but when I try to deploy it it fails immediately on the DownloadBundle event. In order to get the CLI I did have to add a bunch of permissions manually to my user in IAM (even so far as temporarily granting administrator access just to download com.datomic/ion), so I'm inclined to think it's a permission issue.
I can't find anything in the troubleshooting area of the docs. Does anyone have any suggestions on what might be the issue? I'm not even sure where to look in AWS to get more info as to what is going on/failing#2021-02-0708:27bendyAh go to sleep, wake up, and find the solution immediately. As always happens. Followed the steps here and the ec2 instance now has proper s3 permissions: https://forum.datomic.com/t/ion-deploy-fails-due-to-access-denied/685/6#2021-02-0718:06niveauverleihIn a recursive pull expression, is there a way to specify both the limit and the attributes ?#2021-02-0719:50kennyHow do Datomic queries handle a relation binding input with a nil variable? e.g.,
[:find ?release
:in $ [[?artist-name ?release-name]]
:where [?artist :artist/name ?artist-name]
[?release :release/artists ?artist]
[?release :release/name ?release-name]]
;; args
[db [["John Lennon" "Mind Games"]
["Paul McCartney" nil]]]#2021-02-0803:59souenzzo@kenny some cases you can bind a ?var to nil, and that var will not match any other value.
But there are some cases (maybe ground? I don't recall) where Datomic complains about nil.#2021-02-0818:54joshkhdatomic is not happy when binding to nil values which are allowed in hetero tuples https://forum.datomic.com/t/nil-value-in-heterogeneous-tuple-throws-a-nullpointerexception/1693#2021-02-0819:08kennyHuh. Odd.#2021-02-0819:32dogenpunkGood afternoon! Cloudfront is returning 403 errors when I try to access https://docs.datomic.com#2021-02-0819:35Alex Miller (Clojure team)yes, there is doc maintenance under way and the docs are going to be unstable for the next few hours#2021-02-0819:35Alex Miller (Clojure team)sorry about the interruption#2021-02-0819:35dogenpunkOk, thanks!#2021-02-0819:36dogenpunkIs there a status page I should check instead of making a fuss here?#2021-02-0819:37Alex Miller (Clojure team)no, fine to make a fuss here :)#2021-02-0819:37dogenpunkRight on, are the docs packaged up anywhere for local reference in these situations?#2021-02-0819:39Alex Miller (Clojure team)not to my knowledge. it's a static site so is generally always available but having a somewhat unique situation at the moment#2021-02-0819:40dogenpunkRight on, I’ll make note of it. Thanks again#2021-02-0900:19jaretWe have fixed the issue with our docs not working. Big thanks to @audiolabs for fixing everything!#2021-02-0909:26Ben SlessHello, I have a question regarding recursive pull syntax - is there a syntax for pulling non-component entities?#2021-02-0912:54joshkhlike you implied, component entities pull recursively. for non-component references you can also pull recursively using this syntax: https://docs.datomic.com/cloud/query/query-pull.html#recursive-specifications#2021-02-0914:34val_waeselynckAm I blind or is there a bug in the Datalog engine?
(d/q '[:find ?e ?a :in $ ?user :where
[?e ?a ?user]]
db my-user-lookup-ref)
=> #{}
(vec (d/datoms db :vaet my-user-lookup-ref))
=>
[#datom[17592186084693 110 17592186072420 13194139573588 true]
#datom[17592186084696 114 17592186072420 13194139573591 true]
...
#datom[17592186072587 213 17592186072420 13194139561484 true]]#2021-02-0914:49favilaIf the attribute cannot be resolved at query “initialization” time, the “v” slot will only match exact matches. It won’t coerce values to an entity id because it doesn’t know that the attribute is a ref until runtime#2021-02-0914:50favila(d/q '[:find ?e ?a :in $ ?user :where
[(d/entid $ ?user) ?user-id]
[?e ?a ?user-id]]
db my-user-lookup-ref)
try this#2021-02-0914:51favilaor use explicit ?a values visible to the query parser (e.g. with [(ground [:attr1 :attr2]) [?a ...]]#2021-02-0914:52favilaI suspect this is by design for performance, but I’ve never heard cognitect say one way or the other#2021-02-0915:14val_waeselynckThanks @U09R86PA4#2021-02-0915:15val_waeselynckThe workaround does the trick, but the behaviour is still quite surprising. If it's not a bug, I think it at least deserves to be documented.#2021-02-0915:59favilaIf you’re looking at an :vaet index and don’t know that you want specifically ref :a values, it’s kind of a correctness issue--how will a query engine know it should interpret your V as a lookup ref? Anything you do might be surprising#2021-02-0915:59favilaE.g., suppose there is a literal tuple you want to match, then entid would be wrong#2021-02-0915:59faviladoing both might also be surprising#2021-02-0920:53val_waeselynckI see your point, but I find it debatable 🙂 from a perspective of logical correctness alone, I could imagine the behaviour being different depending on what attribute is considered.
More importantly: your objection is a good point, but not an immediately obvious one. Arguably, the mere fact that we are debating it calls for official clarification.#2021-02-0920:53val_waeselynckAnyway, thanks for taking the time!#2021-02-0914:34val_waeselynckShould I file an issue?#2021-02-0918:43kennyIs it okay to pass a db created with datomic.client.api/db to a function in the async client api (e.g., datomic.client.api.async/q)?#2021-02-0918:58micah“Error communicating with HOST localhost on PORT 4334” with fresh datomic pro install. Works on other computers, just not this particular laptop. Anyone seen this before?#2021-02-0919:54micahSOLVED: we were running the transactor on Java 15
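favila's ground-based suggestion a few messages up might look like the following; the attribute names are hypothetical, and the idea is that an explicit list of known ref attributes lets the engine resolve a lookup ref in the value slot:

```clojure
;; Enumerate candidate ref attributes so ?a is known at query parse time
;; (attribute names are hypothetical; db and my-user-lookup-ref are from
;; the example above).
(d/q '[:find ?e ?a
       :in $ ?user
       :where
       [(ground [:post/author :comment/author]) [?a ...]]
       [?e ?a ?user]]
     db my-user-lookup-ref)
```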
#2021-02-1016:38hadilsI am setting up WebSockets for my app with lambda functions. Which is more appropriate: DynamoDB or datomic cloud (which I am using for my database) with noHistory turned on?#2021-02-1017:42joshkhi used Datomic Cloud with noHistory because it was easiest (for me) to store values in a database without having to coerce data types, for example UUIDs. DDB gives me a headache. it sounds like you're well on your way, but in case it helps i have an example project that handles multiple clients/sessions per database user via the AWS API Gateway. the docs are half baked though.
https://github.com/joshkh/datomic-ions-websockets#2021-02-1017:52hadilsThank you @U0GC1C09L !#2021-02-1017:56joshkhsure! starting from scratch, it took me a little bit of trial-and-error to understand how everything connects, so if you have any questions then feel free to ping me. i'm interested in ions/websockets as well 🙂#2021-02-1017:48joshkhi'm having some trouble working with cast locally. can anyone see what i'm doing wrong here?
https://docs.datomic.com/cloud/ions/ions-monitoring.html#local-workflow
(require '[datomic.ion.cast])
=>
(cast/initialize-redirect :stdout)
=> :stdout
(cast/event {:msg "test"})
Execution error (IllegalArgumentException) at datomic.ion.cast.impl/fn$G (impl.clj:14).
No implementation of method: :-event of protocol: #'datomic.ion.cast.impl/Cast found for class: nil
com.datomic/ion and com.datomic/ion-dev are definitely in my tree clj -Stree#2021-02-1019:00Joe LaneBounce your repl, then starting afresh, first initialize, then cast. Right now if you cast before initializing it throws that error and will keep returning it until a repl restart. #2021-02-1211:36fmnoisejust understood that I can get connection from entity using such approach
(defn entity-conn [entity]
(-> entity d/entity-db :id d/connect))
what are the downsides of that? The idea is to pass either a db or an entity, instead of passing a connection, to functions which perform changes#2021-02-1211:43kschltzI believe that as long as you're aware that it is only connecting to a named db, not considering its point in time, you'll be ok. You may just encounter an overhead by constantly connecting to your datomic system#2021-02-1211:43favilaDon’t do this evar 🙂. 1) :id is private 2) :id isn’t always a connection string. I doubt this works for in-memory dbs. 3) not getting a connection is liberating because now you know a whole tree of function calls can’t mutate the db. If you do this, you can no longer enjoy that feeling. 4) using entity-db already assumes that the function is getting an entity instead of an equivalent map. This is soapbox territory: IMO, d/entity, while convenient and cool, was a mistake because it makes it very easy to lose track of dependencies in large codebases over the long-term (i.e. what does this tree of functions need to read to do its job). Prefer d/pull for new development, which makes that explicit and also eases migration to cloud if you ever do that in the future.
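A sketch of the alternative favila recommends: reads take a db value and declare exactly what they need via d/pull; writes take the connection explicitly (client-API style; the attribute and function names are hypothetical):

```clojure
;; Reads: a db value in, an explicit pull pattern, no hidden dependencies.
(defn user-view [db user-eid]
  (d/pull db [:user/name :user/email] user-eid))

;; Writes: the conn is passed explicitly, so only code that is handed
;; the conn can transact.
(defn rename-user! [conn user-eid new-name]
  (d/transact conn {:tx-data [[:db/add user-eid :user/name new-name]]}))
```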
#2021-02-1211:44kschltzthat's a great point#2021-02-1211:45favilaThis also breaks the “use a consistent db value for a unit of work” best practice: https://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2021-02-1212:08fmnoisethanks!#2021-02-1215:14dpsuttonwe did something similar to this at a previous job. I built a type that would record the different access to entity so we could create a pull at the top of the chain that would supply all of the necessary values below. was quite fun but the migration was still kinda scary. its not a great place to be and you will have to remove (or will desperately want to remove) all of this stuff at some point in the future#2021-02-1218:22fmnoisebtw link to Log API doesn't work https://docs.datomic.com/log.html
link is placed here https://docs.datomic.com/on-prem/best-practices.html#use-log-api#2021-02-1218:23fmnoiseprobably it should be https://docs.datomic.com/on-prem/api/log.html#2021-02-1321:07genekimHello, Datomic folks — I’m dusting off the code that I wrote to research the Twitter relationship graph (which I referred to in my Conj2019 talk, which I’d like to clean up and publish). But this time, my goal is to archive and organize some notes I’ve been tweeting out.
I’m pondering on how to store this in Datomic — I’d like to just store the entire tweet data structure that Twitter returns as an EDN string, and parse out new fields as I need. In fact, if I could just store the entire data structure, I’m not even sure if I want to pull out and store the tweet text as a separate entity. (So why store it? It’s a pain in the butt to query the Twitter API because of rate limits, and retrieval is not quite as easy as one would hope: they don’t even support pagination.)
The problem is the size of these data blobs: average is about 2.5K, and max is around 7K.
Does anyone have any advice on how you’d tackle storing this in Datomic? I’m not loving the idea of storing it in S3, in aggregate form or individually, mostly because it’s one more thing to access/manage. But I’m open to any/all advice on how to approach this!
I’ll post a sample tweet in reply, as well as distribution of
(->> tweets
(map str)
(map count)
sort)
Thanks in advance!#2021-02-1321:08genekimAn approx 2K EDN data structure of a tweet:
{:in_reply_to_screen_name "RealGeneKim",
:is_quote_status false,
:coordinates nil,
:in_reply_to_status_id_str "1360654233322676226",
:place nil,
:geo nil,
:in_reply_to_status_id 1360654233322676226,
:entities {:hashtags [],
:symbols [],
:user_mentions [{:screen_name "QuinnyPig",
:name "Corey Quinn",
:id 97114171,
:id_str "97114171",
:indices [0 10]}],
:urls []},
:source "<a href=\"\" rel=\"nofollow\">Tweetbot for iΟS</a>",
:lang "en",
:in_reply_to_user_id_str "19734656",
:id 1360654514076852224,
:contributors nil,
:truncated false,
:retweeted false,
:in_reply_to_user_id 19734656,
:id_str "1360654514076852224",
:favorited false,
:user {:description "WSJ bestselling author: Unicorn Project! DevOps researcher/enthusiast. Coauthor: Phoenix Project, Accelerate. Host of The Idealcast. Tripwire founder. Clojure.",
:profile_link_color "1DA1F2",
:profile_sidebar_border_color "C0DEED",
:is_translation_enabled false,
:profile_image_url "",
:profile_use_background_image true,
:default_profile true,
:profile_background_image_url "",
:is_translator false,
:profile_text_color "333333",
:profile_banner_url "",
:name "Gene Kim",
:profile_background_image_url_https "",
:favourites_count 7546,
:screen_name "RealGeneKim",
:entities {:url {:urls [{:url "",
:expanded_url "",
:display_url "",
:indices [0 23]}]},
:description {:urls []}},
:listed_count 1826,
:profile_image_url_https "",
:statuses_count 42237,
:has_extended_profile false,
:contributors_enabled false,
:following nil,
:lang nil,
:utc_offset nil,
:notifications nil,
:default_profile_image false,
:profile_background_color "C0DEED",
:id 19734656,
:follow_request_sent nil,
:url "",
:translator_type "none",
:time_zone nil,
:profile_sidebar_fill_color "DDEEF6",
:protected false,
:profile_background_tile false,
:id_str "19734656",
:geo_enabled false,
:location "ÜT: 45.527981,-122.670577",
:followers_count 49973,
:friends_count 1475,
:verified false,
:created_at "Thu Jan 29 21:10:55 +0000 2009"},
:retweet_count 0,
:favorite_count 6,
:created_at "Sat Feb 13 18:18:10 +0000 2021",
:text "@QuinnyPig I wish it were always easy to find Twitter handle of blog authors.
Wanted to acknowledge Techy's great work."}#2021-02-1321:08genekimSorted distribution of string length of these EDN representations.
(2423
2505
2506
2530
2537
2558
2610
2617
2618
2625
2633
2640
2648
2651
2652
2653
2654
2655
2675
2677
2678
2679
2689
2702
2702
2707
2707
2707
2707
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2708
2710
2712
2712
2725
2726
2727
2727
2729
2730
2732
2735
2735
2755
2769
2776
2786
2788
2798
2804
2804
2804
2804
2804
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2805
2817
2820
2831
2831
2846
2848
2894
2909
2914
2927
2930
2930
3598
3847
4240
4924
5092
5262
5435
5846
6656
6950)
#2021-02-1322:24Joe LaneHey @U6VPZS1EK, long time no see!
A few questions and recommendations:
1. Are the tweets considered immutable by now or do you need additional updates if they happen in the future (more comments, additional likes, etc.)
2. How many tweets do you have?
3. Unfortunately, I would not recommend storing big string blobs in Datomic. It can turn into a big headache for a variety of reasons which I'm happy to go into more detail about if you're curious.
4. Will the tweet data be the only data in the database or is it a larger system?
5. Will you be running analysis over the tweets, or is this simply an archive of the data indexed by id?
6. Would you be interested in generating a schema for the tweet data? There are a few fun ways to analyze an edn datastructure and generate a schema. I could work out a close approximation to get you started if you sent me an edn file of one of the tweets (smartquotes are evil).#2021-02-1322:36esp1I would also be interested in understanding why storing big string blobs in Datomic is problematic - and also what would be considered 'big'. I'm assuming it may adversely impact indexing performance? I have a similar problem to @U6VPZS1EK and have considered storing the immutable large files in S3 under a uuid path and storing the path references in Datomic, but it would be nice if Datomic could support this use case out of the box. Are there any plans for Datomic to support this kind of thing in the future?#2021-02-1322:47genekim@U0CJ19XAM Great to hear from you — I hope all is well! And thanks for the thoughtful questions — can’t wait to see where you go with this! 🙂
1. immutable, for sure.
2. about 50K tweets
3. For sure, after some research, I’m thinking this is wise. I suppose any database would do, right? (just store, say, MySQL row id into Datomic, as well as any parsed entities?)
4. I’m adding the tweets to one that was primarily about users, because it seems like it could be fun
5. At this point, no analysis… but I’m discovering processing is more significant than I had thought. I think it’d be super useful to extract all the graphics links, which would be great to store in Datomic.
6. Thanks for the offer — how about I post the schema when I’m a little further along.
(After exploring the data, I think I’m going to extract all the images, which I’ve wanted to do for years — there are some pictures from conferences I know I either posted on Twitter, or just retweeted them.)
So, I guess I’m going to pour all these 35K JSON entities into a CloudSQL MySQL table that I have laying around, and then transform/load into Datomic.
Super helpful, Joe! Thank you!#2021-02-1323:27ghadiHi @U6VPZS1EK. Adding to what @U0CJ19XAM said, you've already done the important work of understanding the blob's size distribution.
Blobs give you no leverage in Datomic. Extract and transact the information you want to query through, then add a Base64(SHA256(blob.json)) datom alongside, to point back to the blob. (Then you can store the blob on disk, EFS, S3. whatever)
#2021-02-1323:29ghadicontent-addressing is a nice way to punt on the decision on how to transform the important parts of the source data#2021-02-1323:35genekimHey, @U050ECB92! I did a double-take on your suggestion, because it was so startling.
I was thinking that for each tweet, I’d store with it a, say, :mysql-tweet-id , so I could retrieve the original JSON.
You’re suggesting that I not do that, and reference it by the hash of the JSON (e.g., tweet-sha) with each tweet, to completely remove any assumption of where that tweet is stored, right? (mysql, S3, etc.)
Do I have that right? (my reaction to “content-addressing”: 🤯🤯)#2021-02-1323:36ghadiyeah just tweet-sha, not mysql-tweet-sha#2021-02-1323:36genekimAwesome! Thank you, all!#2021-02-1323:37ghadiotherwise in 5 years you'll wonder why the program is querying Postgres for data with mysql-tweet-id in it 😉
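[Editor's sketch] The Base64(SHA256(blob.json)) pointer ghadi describes can be built with plain JDK classes. The content-address helper and the :tweet/* attribute names below are hypothetical, not from the thread:

```clojure
(import '(java.security MessageDigest)
        '(java.util Base64))

(defn content-address
  "Base64-encoded SHA-256 of a blob string: a stable key that says
  nothing about where the blob is stored (disk, EFS, S3, ...)."
  [^String blob]
  (let [digest (.digest (MessageDigest/getInstance "SHA-256")
                        (.getBytes blob "UTF-8"))]
    (.encodeToString (Base64/getEncoder) digest)))

;; Transact only the fields you want to query through, plus the
;; address pointing back at the raw JSON, e.g.:
;; {:tweet/id       1360654514076852224
;;  :tweet/text     "..."
;;  :tweet/blob-sha (content-address raw-json)}
```

Because the key is derived from the content itself, moving the blobs to a different store later requires no change to the datoms.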
#2021-02-1322:23esp1Q: is it possible to have a notion of identity / upsert that is localized to components within a particular entity instance? e.g. say I have entities that may contain some number of named config components:
[{:node/id "N1"
:node/configs [{:node.config/type "C1"
:node.config/value "abc"}
{:node.config/type "C2"
:node.config/value "def"}]}
{:node/id "N2"
:node/configs [{:node.config/type "C1"
:node.config/value "aardvark"}]}]
There is a limited set of config types (e.g. C1, C2), so as shown above multiple nodes may have config components with the same type (node N1 and N2 both have a C1 config component). However I'd also like to be able to upsert new values for config components.
But if try to enable upsert by adding :db.unique/identity to the :node.config/type attribute, then multiple nodes can no longer have the same config type (I'll get a unique constraint violation). Is it possible to have a notion of unique identity that is local to the components within a given entity? How would I do this?#2021-02-1401:03Joe LaneHey @U06BTJLTU, if you change the schema around a little bit you could use tuples to accomplish this. Composite tuples in particular.#2021-02-1401:23esp1Thanks for replying @U0CJ19XAM! I tried this but I'm unclear on what tuple attrs to use to make this work. I tried:
{:db/ident :node.config/parent+type
:db/valueType :db.type/tuple
:db/tupleAttrs [:node/_configs :node.config/type]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
but when I try to transact this I get :db.error/invalid-tuple-attrs . Could you give me a little more detail on what schema changes you had in mind to accomplish this?#2021-02-1401:33Linus Ericsson:node/_configs should probably just be :node/configs (no underscore).#2021-02-1401:51esp1Hi @UQY3M3F6D - I tried that as well, but it also fails with the same :db.error/invalid-tuple-attrs error (maybe because :node/configs is cardinality many?). I used the inverse :node/_configs attribute originally because I am assuming that the tuple attributes would need to be on the same entity, and :node/configs is on the parent entity and :node.config/type is on the child component. This is why I'm confused as to how to make this work.. 😅#2021-02-1401:58Joe Lane@U06BTJLTU it sounds like the combination of :node/id and :node.config/type might represent an entity you haven't named yet. Composite tuples can't use backrefs, if you must use a tuple, you will need to flatten out the relationship.
There may be other approaches to modeling your domain though, so think about it hard.#2021-02-1403:37esp1Hm. I could flatten the 1-many relation between node and config by inverting it and having the config entities point back to the node (having a :node.config/parent-node attribute on the configs). This will let me create a composite tuple with attrs [:node.config/parent-node :node.config/type], but then I can no longer use :db/isComponent to indicate that the configs are components of the node, so retracting a node will no longer automatically retract its configs. I suppose I could use a custom transaction function to handle the retractions tho.#2021-02-1404:25lispers-anonymousIs it possible to pass rules to a https://docs.datomic.com/cloud/query/query-data-reference.html#q in datomic cloud? Example in thread
Edit: I think I figured it out. I also posted my conclusions in the thread.#2021-02-1404:27lispers-anonymousWhen I try it I get an exception that says
Unable to resolve symbol: % in this context
A really simplified case looks something like this
(d/q '[:find ?t ?thing-count
:in $ %
:where
[?t :thing/id ?id]
[(q '[:find (count ?t)
:in $ %
:where
(active-thing ?t)]
$ %)
[[?thing-count]]]
[?t :thing/color "blue"]]
db rules)
I've tried shuffling around just passing in $, not using :in in the nested query, adding quotes in different places, using the map form of a query for the nested q . Nothing has worked so far.#2021-02-1404:37lispers-anonymousAs soon as I got this posted I figured it out. I can just bind the rules to a symbol like ?rules and pass it in that way.
For the curious, it looked something like
(d/q '[:find ?t ?thing-count
:in $ ?rules
:where
[?t :thing/id ?id]
[(q '[:find (count ?t)
:in $ %
:where
(active-thing ?t)]
$ ?rules)
[[?thing-count]]]
[?t :thing/color "blue"]]
db rules)#2021-02-1404:42lispers-anonymousI can't use the rules in the top level of the query when it's bound to ?rules. But I can pass in the same set of rules twice, and bind one at the top level % and the other to ?rules. What a trip!
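[Editor's sketch] The rules value passed in above never appears in the thread. Purely as an illustration, a rule set defining active-thing could be plain data like the following; :thing/active? is an invented attribute:

```clojure
;; Datalog rules are just data: a vector of [(head ...) clause...] forms.
(def rules
  '[[(active-thing ?t)
     [?t :thing/id _]
     [?t :thing/active? true]]])

;; As noted above, the same value can be bound once as ?rules, or
;; passed twice to serve both the top-level % and the nested ?rules:
;; (d/q query db rules rules)
```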
#2021-02-1418:34joe smithI am thinking of using Google Firebase to power authentication and store user data for Android, iOS. For financial data (deposits, wallet balance) I am going to use Datomic on AWS. I would like to use Firebase for rapid realtime response but datomic as an "ultimate authority of truth". ex) Firebase shows deposits instantly. but this is not reflected in Datomic yet.
Is this a viable strategy? syncing firebase and datomic somehow?
or
Should I be using Datomic always? Read data, write data to datomic without relying on Firebase? It might be slow at some point if there are lot of writes (I read Datomic isn't built for large volume of writes) and I'm afraid that it might impact the user experience on mobile app side.
I am trying to justify using Datomic to my team. Since we are dealing with real money, we need an immutable log, and ability to look at snapshots of the past, meet regulations but overall concerned about the impact of network performance that might trickle to the end user (as a result of calling datomic on AWS api gateway/lambda from google firebase cloud functions) #2021-02-1505:42emJust my two cents, but if your primary concern is financial in nature with data conflict concerns and an emphasis on immutability, Datomic would be a great fit as the sole database solution. I understand that you and your team might hear about Datomic's limited write-throughput due to global transactor in a single system, but this is again extremely relative, and is only really a problem at very high throughputs, or poor product fits like realtime applications (gaming, heavy clickstream logs, etc.). I doubt for financial data you reach anything close to troubling Datomic's write throughput, and even if someday you do, Cognitect's new backing is literally Nubank, the largest digital bank in the world, run primarily on Datomic (with advanced sharding etc., I'm sure you could get support if this ever becomes a necessity).
Read latency is a more relevant problem especially if you want to force your application to use Google Cloud's stack for Cloud Functions etc., though unless this is a strict requirement of your organization, I'm not sure why AWS Lambdas alone wouldn't be good enough. AWS Cognito also works pretty nicely with lambda hooks into automatically syncing user information into Datomic with some simple setup. There's also just HTTP-direct for production query groups which is really fast and cuts out lambda latency - JWT authentication would let your mobile app directly hit this endpoint too without much fuss.#2021-02-1506:38alekszelarkHi! I know Datomic doesn’t support ordered lists. However, there are some 3rd party implementations of linked list or indexed list data structures.
We want to support some simple tables. Let’s say we have a following schema: a simple table with 2-column rows.
[{:db/ident :table/row
:db/valueType :db.type/ref
:db/isComponent true
:db/cardinality :db.cardinality/many}
{:db/ident :row/a
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :row/b
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}]
Then, we want to transact some data
(d/transact conn [{:table/row [{:row/a 1 :row/b 2}
{:row/a 10 :row/b 2}
; thousands of the other rows
]}])
Considering two things: all rows are committed within single transaction, and we won’t add other rows to the table down the road — can we rely on entity ids for order?#2021-02-1507:54tatutI don’t think there’s any guarantee about :db/id monotonically increasing#2021-02-1507:54tatutbest to treat them only as opaque identifiers from what I gather#2021-02-1508:54alekszelarkThank you.#2021-02-1520:39akisHey all!
What's the simplest way to use Datomic for a hobby project? I've tried using datomic ions for a week or so, but the additional complexity around AWS resulted in spending most of my time on infrastructure-related stuff#2021-02-1520:41jjttjjHave you looked at dev-local? It seems designed for what you describe https://docs.datomic.com/cloud/dev-local.html#2021-02-1520:47akisI've tried dev-local at first for learning datalog, but main concern I have with using it in a deployed app is durability and eventual scaling#2021-02-1520:48emWhat's been a pain point for Ions? I've found it to be really simple to use and actually abstracts away a lot of the annoying AWS complexity that you usually need to do to setup nodes, VPN, etc.#2021-02-1520:51akisI've managed to follow the tutorial fine, but here's where I got stuck:
I created a lambda proxy resource which acted as a single entry point to my system, with Cognito Authorizer. It looked something like this:
(def handler
(ring/ring-handler
(ring/router
[["/api"
["/test" {:get (fn [_]
(-> (ok {:foo "bar"})
(res/header "Access-Control-Allow-Origin" "*")))}]]])
(ring/create-default-handler)))
(def handler-lambda-proxy
(apigw/ionize handler))
After enabling authorizer on that resource, preflight requests were getting rejected with 401 error (it makes sense since all requests need to be authorized).
I've enabled CORS on it, but couldn't get it to work#2021-02-1520:54akisSo definitely not issues related to datomic, but I feel like I've added a lot of infrastructure complexity, and would like to take a step back to a simpler system#2021-02-1520:56JohnJdatomic starter#2021-02-1520:58Joe Lane@UKDLTFSE4 your problem is that your Cognito authorizer in api gateway is authenticating the options method, causing the cors preflight to fail.
Instead of using ANY for your http method, try explicitly making GET, PUT, POST, and DELETE authenticated but leave OPTIONS unauthenticated. #2021-02-1520:58akis@U0CJ19XAM does that mean I shouldn't use a lambda proxy at all?#2021-02-1520:59akisit seems that lambda proxy endpoint needs to have ANY#2021-02-1520:59emAhh, I see, that's more a case of AWS documentation encouraging complex solutions than helping clarify what's going on. Cognito works great with Datomic Ions, but as @U0CJ19XAM is pointing out, it's more of an issue of the extra layer of authorization you're putting on top of API gateway.
I really suggest not using an extra lambda step - it's entirely wasted latency that you could easily bake into the actual processing of your code, given that unless you're on production, you already have a lambda ingress#2021-02-1520:59Joe LaneNot what I’m suggesting, lambda proxy is fine. It does NOT need to be ANY. #2021-02-1521:00emYou could also fix the lambda proxy, but if you want to do Cognito auth, just verify the public key with one function in your handler would save you all this hassle#2021-02-1521:03Joe Lane@UNRDXKBNY I think for "getting started" ion lambda proxy is a fantastic fit, otherwise he will need to "get started" with a production topology to get access to http-direct, kind of a non-starter. He has ALSO already correctly configured cognito, it's just a confusing interaction between api-gateway and cognito.#2021-02-1521:05Joe Lane@UKDLTFSE4 Make 5 explicit methods, each calling the same lambda. Then don't auth the options request and you should be good to go 👍
FWIW, I'm so confident this is the issue because I've hit it so many times. I brought it up with the product team literally this week as something we could better document.#2021-02-1521:05akisThanks a lot for you suggestions, I appreciate it! I'll give it another shot and try that#2021-02-1521:05emYeah that's entirely fair, although even in the Ion solo configuration, the lambda ingress into the system isn't a blocker for simply doing the Cognito JWT auth in your handler. It saves a lambda's worth of latency, adds just one dependency to your code (buddy or other JWT library you like), still uses Cognito as IDP, and also saves about $10 a month since the solo lambda is so terrible in latency unless you do provisioned concurrency.#2021-02-1521:07emThe one benefit of having an extra external step of a lambda proxy authorizer is blocking a lot of unwanted traffic without taxing your server, but this doesn't seem very applicable on a hobby or even medium startup level situation#2021-02-1521:08Joe LaneYou can set up a heartbeat on your ion so it's warm before going to provisioned concurrency. Let’s keep the costs down#2021-02-1521:12Joe Lane@UKDLTFSE4 if you're still stuck after the second try dm me and we will get it sorted out. #2021-02-1521:18em@U0CJ19XAM Yeah, you're absolutely right about the heartbeat approach, forgot about the good old cloudwatch ping since recent clients insisted on AWS guaranteed "provisioned concurrency", which is honestly basically punting on serverless entirely and going back to full instances...
On that note, I've always wondered, is there a specific reason Lambda the ultimate is on the JVM? As far as I understand, it's literally just a nice proxy/entry point and doesn't need heavy computation, so a ClojureScript implementation on Node.js with a much less punishing cold-start ceiling seems like it'd solve a lot of these headaches. It's a feature I desperately wanted for a long time and it seems it'd mesh better with serverless-esque workloads
If you are scaling and need to increase the number of concurrent lambdas and worried about coldstart in this context, add a timeout to the client request and retry after you're over x% of your average latency (where you pick x). You'll likely be routed to a different lambda at that point and still finish the request faster than waiting for the coldstart.#2021-02-1521:46em@U0CJ19XAM That's a cool approach! Haven't thought about the probabilistic retry method when concurrency on lambdas factors in, super helpful. Would it be fair to say that with these methods to combat cold start, the JVM lambda has better throughput considering warm instances than other platforms with less cold-start ceiling? Love Datomic, just trying to understand the design choices as a matter of curiosity and learning#2021-02-1521:53Joe LaneGlad you're a fan!
There are many other factors besides performance that factor in to the choice of JVM lambda vs "other" like maintainability, stability, ecosystem, stdlib, etc.
I'm not going to make a general performance claim here, because the second I do, someone will throw a benchmark at me for a specific case where x is faster than y and call me a liar 🙂
Luckily, this question has nothing to do with Datomic and you can defer to all the other liars on the internet who made their own benchmarks and conclusions.
In all seriousness, if performance characteristics are critical for your use-case, YOU need to do the measurements for your scenario and decide for yourself.
#2021-02-1612:07souenzzo@UKDLTFSE4 checkout #datalog, #rdf, #asami, #datahike and many other datomic-like db's for hobby 🙂#2021-02-1701:44akis@U2J4FRT2T thanks for the suggestions, I know about datahike but didn't know about rdf and asami#2021-02-1701:48akisJust a quick update on my problem, in case it becomes relevant to anyone else. It turns out that CORS issues I was having were related to */* binary media type. After removing it, OPTION requests are resolving successfully, and other methods are properly authenticated with cognito#2021-02-1701:52Joe LaneGreat to hear @UKDLTFSE4 ! #2021-02-1613:20souenzzoCan I use db.attr/preds with database functions ?#2021-02-1613:31Lennart BuitWe seem to have hit a snag. We have on-prem, but we reach it with the client api. If we install a database function with a peer connection, and we (datomic.client.api) tx-range over this transaction it triggers an exception: Could not marshal response: Not supported: class clojure.lang.Delay.
Similarly, if we datomic.client.api/pull the database function's ident, we get the same marshal exception, whereas datomic.api/pull (with a peer conn) returns the fn without a hitch. Is this a known bug?
#2021-02-1614:40Joe LaneHi @UDF11HLKC , could you file a support ticket and provide a stacktrace (if there is one)?#2021-02-1614:55Joe LaneBest thing you could do would be creating a secret gist, showing a minimal repro, and then sending it to the Datomic support email
#2021-02-1615:53Lennart BuitCan do!
#2021-02-1621:27Lennart BuitDone, also, googling ‘datomic marshal error’ leads to a lot of comments by marshall about errors but not particularly about marshalling errors
#2021-02-1621:39Joe LaneNot sure I can do much about that one haha#2021-02-1614:11defaAs I understand queries with fulltext search require :db/fulltext in the schema for that particular attribute to be true. Is it possible to alter the schema of an existing database to allow fullext search? When I transact [[:db/add :my-thingy/name :db/fulltext true]] I get an error:
:db.error/invalid-alter-attribute Error: {:db/error :db.error/unsupported-alter-schema, ...
What am I doing wrong?#2021-02-1614:26thumbnailYou can't alter :db/fulltext , see: https://docs.datomic.com/on-prem/schema/schema-change.html#schema-alteration#2021-02-1614:27thumbnailYou can rename the old attribute and create a new attribute (with db/fulltext).
Here's an example: https://gist.github.com/ccfontes/3f566db393da14742a9a
No experience with that approach though.#2021-02-1615:15favilaUnless this is a throwaway db that you know will never get big, I highly advise never using fulltext#2021-02-1615:16favilait’s “easy” but it gives you very little power, and the lucene index has some catastrophic misbehavior. and as your application and db grows and maybe you move off to elasticsearch or something, there’s no way to easily get rid of this index#2021-02-1614:27PhilI've tried really hard to find an online example of using create-database with a sql/postgres backend. I spent the entire day yesterday trying to get this to work. I'm using datomic-pro. I can create a db using the "dataomic:free" image and my own datomic pro with "datomic:mem" but no luck with a real storage backend. I can't use cloud and don't want to be forced in to dynamodb or cassandra at this point. I'm trying to sell this stack to a new (non-clojure) group so I want to lower the number of new technologies at POC time. Any examples out there?#2021-02-1614:57Joe LaneThis may interest you and your team. Appears to use postgres https://youtu.be/QrSnTIHotZE#2021-02-1615:31Philthanks, I give that a run-thru tonight. From a quick look, I think I was using datomic.client.api . Is it correct that only datomic.api and not datomic.client.api is required?#2021-02-1615:52Joe LaneI'm not sure how best to answer that question @U120SKKNK, but if you're still stuck after running through that project tonight (BTW, the source is linked in the video descriptioin) reach out and we can figure out what's going on, sound good?#2021-02-1616:15PhilI watched more. Excellent resource. Thanks @U0CJ19XAM#2021-02-1712:48PhilAnswering my question above re on-prem and create-database https://docs.datomic.com/client-api/datomic.client.api.html#var-create-database
>
> NOTE: create-database is not available with peer-server.
> Use a Datomic Peer to create databases with Datomic On-Prem. #2021-02-1620:35Michael Stokley:db/doc is a way to annotate an attribute. can i annotate an assertion?#2021-02-1620:37Michael Stokleyhttps://docs.datomic.com/cloud/transactions/transaction-processing.html#reified-transactions#2021-02-1620:40Joe LaneHey @U7EFFJG73, any reason you don't want to make your own attribute like the example you referenced?#2021-02-1620:43Michael StokleyJoe, no, I'm not opposed to creating new attributes#2021-02-1620:43Joe LaneI think I'm not understanding your question...#2021-02-1620:45Michael StokleyI'm about to un-deprecate an attribute of ours and I'd like to include some comments about why - specifically, why we're changing our mind about the deprecation. :db/doc doesn't seem appropriate, since it's per attribute (i think).#2021-02-1620:46Michael Stokleyincluding the comments against the transaction entity would work, too.#2021-02-1620:47Michael Stokleyi'm just curious whether i could go even more fine grained and include a comment against a specific assertion within a transaction. that was my original instinct, before i knew about the per-transaction annotations.#2021-02-1620:47Michael Stokleyam i making any sense?#2021-02-1620:48Joe LaneI would attach either your own attribute or db/doc to the transaction entity if I was in your case.
I think I understand the question. It's about documenting a change about 1 of n assertions within a transaction. Specifically, can you reify an individual assertion within a transaction of many assertions, right?#2021-02-1620:49Michael Stokleyyep! and per-tx is perfect. sounds like assertions are not reified in the same way, which is fine.#2021-02-1620:50Joe LaneGreat, does this resolve things then?
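[Editor's sketch] The per-transaction annotation being discussed might look like this. It assumes the peer API, an open connection conn, and an invented attribute ident; the "datomic.tx" tempid string resolves to the reified transaction entity itself:

```clojure
(require '[datomic.api :as d])

;; Hypothetical example: re-enable (un-deprecate) an attribute and
;; record the reasoning on this transaction's entity via :db/doc.
@(d/transact conn
   [{:db/ident :my.domain/legacy-attr              ; invented ident
     :db/doc   "Active again; see the tx annotation for why."}
    [:db/add "datomic.tx" :db/doc
     "Un-deprecating :my.domain/legacy-attr: downstream consumers still rely on it."]])
```

A custom attribute (say, :audit/reason) on "datomic.tx" works the same way if :db/doc feels too overloaded.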
#2021-02-1620:50Michael Stokleyyep, thanks @U0CJ19XAM#2021-02-1620:50Joe LaneGreat 👍 Always nice to hear from you @U7EFFJG73!#2021-02-1620:53Michael Stokleylikewise!#2021-02-1715:03joshkhcollisions aside, are there any indexing and/or query performance benefits of using a uuid vs a string as some entity identifier? for example
{:player/id #uuid"cb7afbf9-95ca-4c5b-af42-b096342bae61"}
vs
{:player/id "a4st390xrvskm9ecm452jbn"}#2021-02-1715:11tvaughanI asked a similar question, https://github.com/zelark/nano-id/issues/12#issuecomment-625522643
The answer appears to be that there's not much of a difference, https://clojurians-log.clojureverse.org/datomic/2020-05-08/1588924450.320800 (currently getting a gateway timeout error)#2021-02-1715:24joshkhthat's perfect, thanks @U0P7ZBZCK. go figure, i prefer UUIDs but i have to incorporate some string ids coming from a non-Datomic system. :man-shrugging:
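For reference, both identifier types above are declared the same way; a sketch of the two schema variants (attribute name taken from the example, everything else standard Datomic schema — `:db.unique/identity` is what makes either usable as a lookup ref):

```clojure
;; uuid variant
[{:db/ident       :player/id
  :db/valueType   :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]

;; string variant, e.g. for ids supplied by a non-Datomic system
[{:db/ident       :player/id
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```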
#2021-02-1808:26tatutIf I remember correctly, there was some talk about backup/restore functionality for datomic cloud being in the works, but can’t find any news about it. Is that still being worked on?#2021-02-1819:01ennam I misunderstanding the :limit option to index-pull?
(count
(d/index-pull db {:index :avet
:selector [:db/id]
:start [:story/group group-id]
:limit 5}))
10
I would expect to get no more than 5 results back. I get back 10 results (the total number of matching results) no matter what limit I specify.#2021-02-1819:08ennis this feature implemented only for Datomic Cloud? It’s described in the on-prem index-pull documentation.#2021-02-1819:09Joe LaneYou can Call (take 5 (d/index-pull ...#2021-02-1819:10ennYes, I know, but I was planning to use :limit in conjunction with :offset to do pagination without realizing the full collection of results. (`:offset` does not appear to have any effect either for me.)#2021-02-1819:26Joe Lane@enn Do you have a link to the docs you're reading?#2021-02-1819:26enn@lanejo01 https://docs.datomic.com/on-prem/query/index-pull.html#2021-02-1819:27Joe LaneThanks#2021-02-1819:27Joe LaneAnd you're using on-prem, correct?#2021-02-1819:27ennYes#2021-02-1819:30Joe Lanepeer api or client api?#2021-02-1819:30ennThis is on a peer#2021-02-1820:25jaretHi @ennhttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/index-pull does not include :limit. This is implemented in the client-api which is accessible in the latest client-pro release https://docs.datomic.com/on-prem/overview/project-setup.html#client-setup#2021-02-1820:26jaretThe reason for this is at the top level of client in https://docs.datomic.com/client-api/datomic.client.api.html#top:#2021-02-1820:26jaretFunctions that support offset and limit take the following
additional optional keys:
:offset Number of results to omit from the beginning
of the returned data.
:limit Maximum total number of results to return.
Specify -1 for no limit. Defaults to -1 for q
and to 1000 for all other APIs.#2021-02-1820:27jaretI can see how this is confusing in our docs, given the example shows the usage of :limit without the added context above. I will update the docs to reflect that.#2021-02-1820:29jaretI need to also discuss with the team if peer api will ever support index-pull with limit, but as Joe said, you can still take 5 etc.
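Since the peer's d/index-pull returns a lazy seq, take/drop can stand in for :limit/:offset. A sketch (the helper name is hypothetical; parameters follow the example above; note that drop may still pay the cost of realizing the skipped pulls — see the discussion that follows):

```clojure
(defn page-of-index-pull
  "Hypothetical helper: returns page n (0-based) of page-size results."
  [db group-id n page-size]
  (->> (d/index-pull db {:index    :avet
                         :selector [:db/id]
                         :start    [:story/group group-id]})
       (drop (* n page-size))   ; may still do the work of the skipped pulls
       (take page-size)))
```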
#2021-02-2216:51bhurlowThanks @jaret this was a confusion point for my team as well#2021-02-1820:30favilaIs the pull realized by advancing the outer seq, or only by reading each entry? E.g. if we go (drop 100 result-of-index-pull), does that do the work of 100 pulls or 0?#2021-02-1820:35favila(I’m trying to discern if drop is an exact workalike to :offset or potentially much more expensive in the peer api)#2021-02-1820:37jaretMy understanding is it does the work of 100 pulls. But I need to validate that understanding and am running that by Stu.#2021-02-1820:45Joe Lane@enn Ideally when implementing a pagination api though, you wouldn't use offset like that. Rather, you would grab the last element of the prior page and use that in the value position of :start, or in your case, :group-id.#2021-02-1821:02favilaWhat you’re suggesting would be more complex than this in the general case. You would have to retain the last pull of the page, transform it back into the next :start vector (which may have grown longer if e.g. a group spans multiple pages), serialize that as a cursor for the client, then rehydrate it when it comes back and know to skip the first result if it is an exact match for :start. I can definitely see not wanting to take all that on in an initial implementation. It also makes it difficult to have a non-opaque cursor--a client may indeed want to skip 100 items or pipeline multiple fetches and be ok with the potentially inconsistent read.#2021-02-1821:02favilaIOW simple offset and limit still has its uses#2021-02-1820:46Joe Lane@favila ^^#2021-02-1820:46Lennart Buit“Cursor based pagination” is the concept Joe Lane is referring to :)!#2021-02-1820:48Lennart Buitpretty mind blown when I first saw it in GraphQL land, pretty cool actually!#2021-02-1820:46favilaSure, ideally. But obviously it’s important enough that the client api has :limit and :offset 🙂#2021-02-1821:15enn@jaret thanks for the clarification, I appreciate it. 
If you hear anything back on whether this will be supported in the future I’d love to hear.#2021-02-2205:29steveb8nQ: I am using Cloud and provide a multi-tenant application to a global audience. I want to minimise latency by letting customers use an AWS region close to them. I plan to support 3 regions for this but I currently have 1. I am wondering how I could allow customers to move regions i.e. replicate all their single-tenant data from one Datomic db to another (with the same schema) in a different region#2021-02-2205:29steveb8nI’m curious to hear how other people have dealt with a requirement like this.#2021-02-2205:30steveb8nAny gotchas or tips would be much appreciated. Or even just suggestions so that I am aware of options I haven’t thought of by my lonesome#2021-02-2205:34steveb8nThe first ideas I had are full replication or on-demand replication at the app level i.e. I build my own export/diff tool. Or is it replication based on mutation events replayed in the 2 other dbs?#2021-02-2211:33Andrey PopeloWe would like to clone a database within a single storage (on-prem). Unfortunately backup-restore can’t do that for us due to its limitations [1]:
> Backup and restore are not suitable for cloning of a database within a single storage. If you attempt to restore a database into a storage that already contains that database, but under a different name, the restore operation will fail.
Are there any other existing ways to do that? Thank you.
[1] https://docs.datomic.com/on-prem/operation/backup.html#limitations#2021-02-2215:06thumbnailWe've experimented with replaying the reflog on a separate database, and even duplicating the underlying storage in order to duplicate/back it up. Both were a pain so we now use backup to a separate install, which is rather painless#2021-02-2220:34JohnJCreate a new table (dynamo or SQL) to restore to#2021-02-2410:16Andrey PopeloThanks#2021-02-2312:43donavanHi, I’m new to Datomic and am looking for some direction in where to learn more about what I’m trying to achieve. I’m trying to ‘mix’ a query and a pull expression in a way that I suspect pull doesn’t allow for. If that is the case is there a more verbose way of achieving what I’m trying to do.
I’m trying to do a ‘filter’ at multiple levels of the return result. In other words I’m trying to return things that have at least one sub-thing that itself has an-attr that is "foobar" and only the sub-things that match the sub-thing clause.
'[:find (pull ?thing [:things/field-1 :things/field-etc {:things/sub-things [*]}])
:in $
:where
[?thing :things/sub-things ?sub-things]
[?sub-things :sub-thing/an-attr "foobar"]]
As far as I understand the pull crawls from ?thing after the query has run so it makes sense in the above query I get all the sub-things and not just the ones that match. But I would really like to constrain the pull expression result somewhat.
My real case is more complicated than that and needs to solve the general case. As for solutions I could post process the results of the pull or I could not use pull but then the problem becomes how to construct the tree results using a raw :find clause. If the latter is at all possible (it doesn’t seem like it) is there any resources someone could point me at? The find spec docs seem to me to not support this. A third possible option would be to use nested-queries.
Sorry for the vague question! Any advice or pointers would be much appreciated 🙂#2021-02-2314:05Joe LaneYou can include two pulls in the find clause.
You can also do a reverse pull from sub-thing to thing for only matching sub-things. #2021-02-2314:05Joe LaneI’m sure there are more ways to accomplish this too. #2021-02-2314:47donavanThe reverse pull is a good pointer thanks! The problem is that both of those require post processing to piece the original tree back together because I have an arbitrary amount of these filters to accommodate.
What I really need (and I’m pretty sure I understand why it doesn’t work that way) is to be able to unify (I think that’s the right word) the logic variables into the pull expression#2021-02-2314:49Joe LaneYou can't unify lvars in the pull pattern of the pull expression (interesting idea though!)#2021-02-2314:52Joe LaneYou can always issue pulls after the query returns. If your code is in a peer or an ion it's already resident in the db process anyways so those pulls will be just as fast as what d/q does.#2021-02-2315:12donavanThanks… that’s a good point about subsequent queries being fast as it’s not on a server somewhere else; I need to let that really sink in properly#2021-02-2313:30fmnoiseHi everyone, maybe stupid question, but can I use more recent version of datomic api (eg 1.0.6165) with less recent version of datomic transactor (eg 0.9.6045)? The reasoning behind that is I'd like to try qseq and other added perks without upgrading infrastructure.#2021-02-2313:31fmnoiseJust to mention - I tried and it works, just curious if there are any downsides with such approach#2021-02-2317:12jaretWe always recommend upgrading both (as a member of the Datomic team) 🙂. But while the configuration is not expressly supported all versions are backwards compatible and cross versions work between peer and transactor. If you encounter any particular issue do let me know by shooting an e-mail to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2021-02-2317:31fmnoisethanks @U1QJACBUM#2021-02-2317:32jaretThe caveat to this is there are some extremely old versions that would have some issues with local storage and potentially the introduction of versions spanning schema changes (i.e. the addition of tuples) may encounter some issues.#2021-02-2316:20Caseyhey folks, is datomic console available with dev-local/cognitect dev tools? Or only for starter/pro/enterprise users?#2021-02-2317:15jaretDatomic Console is shipped with Datomic on-prem. 
Not shipped with cognitect dev tools. Dev tools includes REBL and dev-local.#2021-02-2317:15jaretIf you want to Download Console you can do so from my.datomic: https://my.datomic.com/downloads/console#2021-02-2319:27CaseyThat's great, thanks for the tip.#2021-02-2413:28prnc@U1QJACBUM console is not currently available for cloud, is that right?#2021-02-2413:28jaretCorrect.#2021-02-2413:28prncok, thanks!#2021-02-2317:51jarethttps://forum.datomic.com/t/datomic-cloud-772-9034/1781#2021-02-2317:53jarethttps://forum.datomic.com/t/ion-dev-0-9-282-and-ion-0-9-50/1782#2021-02-2415:45pinkfrogI am going to store blockchain block information into datomic. From time to time, I would query the total transaction count in the most recent H blocks. So this is querying against a window of H blocks. How can I perform that with datomic efficiently?#2021-02-2415:55jjttjjis there a reason not to just store a transaction count value with the block entity?#2021-02-2415:55jjttjjOr are you asking how to do the moving window aggregation (rolling sum of last H blocks)#2021-02-2415:58pinkfrogYes. Sliding window as normally done in Flink. Wonder how to best achieve that w/ datomic.#2021-02-2420:15bmaddyI'd like to monitor the time spent in d/q (specifically, to report it to New Relic). I could just put a wrapper function around d/q and hope developers remember to call that instead of calling d/q directly, but is there a better way? Ideally, I'd do something when setting up the connection to ensure all queries get monitored. Is there a protocol I could extend or something?#2021-02-2420:32Lennart BuitOn client at least, the datomic api is just a set of functions invoking protocol functions on your connection/db value. So what you can do is kinda proxy them: create a deftype for say a db value that forwards all calls to an ‘underlying’ datomic db value. 
In these wrappers you can then add logic like this#2021-02-2420:34Lennart BuitThe benefit is that you only have to control where the db or connection value comes from, the functions that get it are oblivious to that these values are ‘decorated’ like this#2021-02-2420:39bmaddyYeah, that's the initial route I was looking down, but I couldn't figure out how d/q used the db value. Specifically, I didn't see a q method or similar on there which suggested to me that member functions would be called many times on a single query, so I wasn't able to figure out how I'd know when a query starts and ends. Have you tried doing this?#2021-02-2420:40bmaddyI guess I could wrap a db value and record which methods are called to maybe guess how it's used.#2021-02-2420:40kenny@U067Q76EP take a look at https://github.com/ComputeSoftware/datomic-client-memdb/blob/master/src/compute/datomic_client_memdb/core.clj
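For contrast with the protocol-proxying approach, the simple wrapper bmaddy mentioned (and wanted to improve on) can be sketched like this — hypothetical names, with `report-fn` standing in for a New Relic reporter:

```clojure
(defn timed-q
  "Hypothetical wrapper around d/q that reports elapsed time, then
   returns the query result unchanged."
  [report-fn query & inputs]
  (let [start  (System/nanoTime)
        result (apply d/q query inputs)]
    (report-fn {:query      query
                :elapsed-ms (/ (- (System/nanoTime) start) 1e6)})
    result))
```

Its weakness is exactly the one raised above: every call site has to remember to use it, which is what the deftype/proxy idea avoids.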
#2021-02-2420:41kennySpecifically LocalDb#2021-02-2420:42souenzzo@U067Q76EP I did a small lib to ensure that every transact get audited (some extra tx-data on every transaction in a conn).
You can do something like that, but to ensure that every (d/db) in your conn will return the "db-with-spy-methods"
https://github.com/molequedeideias/conncat#2021-02-2420:47kennyBe careful with that. Depending on your system, it can create lots of spurious transactions if you’re not checking for empty first. #2021-02-2420:52bmaddy@U2J4FRT2T, yeah, exactly what I was thinking in terms of wrapping the connection. Nice to see that it works. Thanks!
@U083D6HK9 Nice, it looks like client-impl/Queryable has a q function right there on it that you can implement.
Does anyone know if there's a similar interface for a peer (I should have mentioned in my question that we're using a peer)?#2021-02-2513:45babardosimple question, what does datomic.client.api/delete-database do in Datomic Cloud?
• is the operation revertable?
• do we still keep past transactions in s3?#2021-02-2514:29kennyI don’t believe it is reversible. https://ask.datomic.com/index.php/550/does-deleting-database-permanently-remove-stored-database#2021-02-2514:30babardooh I missed that thx 🙏 I guess we have the same behavior for datomic cloud#2021-02-2514:13donavanWhat’s the easiest way to insert ref entities; I’d imagine doing something like this but it doesn’t work:
(d/transact
conn
{:tx-data
'(#:sub-thing{:some-field "some-data"
:_thing/sub-things [:thing/id "some-application-level-id"]})})#2021-02-2514:19donavanI guess this is it
(d/transact
conn
{:tx-data
[{:thing/id "some-application-level-id"
:thing/sub-things '(#:sub-thing{:some-field "some-data"})}]})#2021-02-2515:31souenzzoHey @U0VP19K6K
As far as I know, you can always use []
'() are just more complex to write
Here your examples expanded
;; first try
{:tx-data [#:sub-thing{:some-field "some-data"
:_thing/sub-things [:thing/id "some-application-level-id"]}]}
=> {:tx-data [{:sub-thing/some-field "some-data"
:_thing/sub-things [:thing/id "some-application-level-id"]}]}
;; Second try
{:tx-data [{:thing/id "some-application-level-id"
:thing/sub-things [#:sub-thing{:some-field "some-data"}]}]}
=> {:tx-data [{:thing/id "some-application-level-id"
:thing/sub-things [{:sub-thing/some-field "some-data"}]}]}#2021-02-2515:32souenzzoIn the first case, I think that you miswrite the "reverse reference"
I think that you tryied to write this:
{:tx-data [#:sub-thing{:some-field "some-data"
:_sub-things [:thing/id "some-application-level-id"]}]}
=> {:tx-data [{:sub-thing/some-field "some-data"
:sub-thing/_sub-things [:thing/id "some-application-level-id"]}]}
#2021-02-2515:33donavanThanks, yeah I just copied that from a much larger tree that I was generating programmatically, I don’t normally write manual lists 😄
#2021-02-2515:34donavanGood spot re. the reverse reference…#2021-02-2515:34donavanAgain, mixing up data that was from repl output with hand written modifications#2021-02-2515:35donavanSo are the reverse references valid in tx-data entities like that? (I will test later regardless)#2021-02-2515:43souenzzoI think that yes, but I already had some issues using lookup references in some places. Sometimes it feels intuitive to use them, but in practice you need to resolve and use "the db id"
For example I had issues with [:db/cas e a v0 v1] when v0 is an entity, I tried to use a lookup ref and Datomic did not accept it
IMHO it's a bug from datomic and not sure if in newer versions it's fixed.
lookup reference - when you do [:an-unique-attr "it's value"]#2021-02-2515:46donavanAh cool, thanks for the info! 🙂
I didn’t know if it was possible so just reverted to the second approach above without thinking too hard about it.#2021-02-2515:49souenzzoi reported that cas bug in a portal that i think does not exist anymore 😅#2021-02-2519:09xcenoDoes anyone know if the https://github.com/uncomplicate/neanderthal stack (specifically the intel-MKL dependency `[org.bytedeco/mkl-platform-redist "2020.3-1.5.4"]`) "just works" on a datomic ion project, or would I have to tinker with the EC2 Compute Instances/Templates?#2021-02-2519:16Joe Laneintel-MKL uses AVX operations. Just make sure you're using an instance that supports that. Please don't "tinker" with the instance templates.#2021-02-2519:40xcenoAlright thanks! I asked because i specifically don't want to mess with the templates, so no worries! 😉#2021-02-2519:45Joe LaneFWIW, this isn't a green light expecting everything to work perfectly out of the box. I think you should try it and report back.#2021-02-2519:47xcenoGot it! I'll just try and see how it goes then#2021-02-2603:42onetomhow can we query the name of the datomic system from ion code?#2021-02-2613:54jaretYou can use ion/get-app-info to return the :app-name#2021-02-2613:54jarethttps://docs.datomic.com/cloud/ions/ions-reference.html#get-app-info#2021-02-2616:16hadilsIs it possible to go from a datomic cloud split stack to a solo stack? We would want to upgrade in the future, but have financial obstacles at the moment. #2021-02-2620:42jaretHadil, Solo and Prod are both split stacks. The split is in "Compute" and "Storage" stacks. I believe you are talking about the master template you get from Marketplace. I recommend moving directly to split solo stack or skipping the master template altogether by subscribing and launching your stacks. You can upgrade from a SOLO Split stack to a production stack easily by upgrading your compute. We do not however recommend downgrading from Production to Solo without talking to us first in support. 
Downgrading from prod to solo is not currently supported.#2021-02-2620:42jaretIf you have specific questions about this shoot me an e-mail at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> and we can have a quick call to discuss or I can log a support case to track your specific needs 🙂#2021-03-0112:11Pragyan Tripathiis there any really good sample project to learn datomic. I am trying to spin a simple server with crud operations to datomic.#2021-03-0112:12raspasovHave you gone through http://learndatalogtoday.org ?#2021-03-0112:17Pragyan TripathiYes… but not completely.. I am familiar with basic datomic concepts and datalog queries.
I was looking for some sample project to understand how everything comes together to build an application….#2021-03-0112:42danierouxhttps://github.com/pedestal/pedestal-ions-sample I found somewhat useful
#2021-03-0113:29jjttjjI haven't looked at it extensively, but https://github.com/clojureverse/clojurians-log-app is a real production app using datomic{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2021-03-0114:21Pragyan TripathiThanks a ton for these projects.. it helps 🙂#2021-03-0122:59mikejcusackIn the case of a personal finance management system in Datomic, does it make more sense to have the current value of an account determined dynamically by reducing the transactions or to create a transaction function that on every transaction it walks through each account involved in the transaction and update an amount attribute for the respective account?#2021-03-0123:56favilaNeither one? You shouldn’t have to use history to determine the account balance#2021-03-0200:41mikejcusackThe latter option isn't using history.#2021-03-0202:37emIn the context of a more event-sourced model the former can make a lot of sense with multiple streams, out-of-sync arrivals, various sources of truth etc. but in this case I think the tx-function approach is totally fine. Especially considering that the scale seems small, and Datomic having single global write transactor. Then again if the scale is small enough reducing on every lookup might not be that expensive either.#2021-03-0202:46mikejcusackThat's about what I was thinking. I don't think the scale would be large enough for reducing to be a problem. Especially since the query would filter down to the relevant transactions first.#2021-03-0203:46steveb8nQ: the on-prem api supported running datalog queries against regular vectors/maps (instead of the db). 1/ where are those samples? 2/ is this also possible using the client/cloud API?#2021-03-0203:47steveb8nbackground: I need a matching api and I’d like to try using datalog instead of core.match#2021-03-0218:49Lennart BuitIn peer you can supply a vector of vectors. You can’t in client#2021-03-0322:35steveb8nthanks. I thought that was the case. shame but core.match will work too#2021-03-0207:47henrikIs there a way to parameterize the pull expression when used in a query? I.e, something like
{:find [(pull ?item ?pull-fields)] … }
(This example does not work)#2021-03-0209:20tatutyou can have patterns as :in parameters without the ?#2021-03-0210:23henrikSo I can, thank you. So the ? is magic, I just thought it was a part of the symbol name.#2021-03-0218:54jarethttps://forum.datomic.com/t/datomic-cloud-781-9041/1792#2021-03-0300:09emLooks fantastic, thanks for all the hard work! Just to clarify, by Lambda runtimes moving to NodeJS, does that mean that lambda the ultimate proxies are now entirely on node? I.e., no more JVM 2s cold starts#2021-03-0310:14xcenoSince datomic cloud/ions run on java 1.8, would I need to spin up a separate EC2 instance in my datomic VPC when I need to use a newer version, or can we somehow upgrade/change the JDK for a specific ion?
Also, generally speaking when I need certain system dependencies in an Ion, like e.g. Python 3.8 or whatever, is there any way to get this stuff set up inside an ion? I couldn't find any answers to those questions in the forum or anywhere else#2021-03-0406:25em+1 on this, I would be curious about solutions/answers here too#2021-03-0518:32xcenoWell, if you ever find an answer please let me know#2021-03-0523:12emBeen digging through the docs all day but couldn't find anything more specific pertinent to this. I'm thinking maybe this is something you'd need to do outside Datomic, on configuring the EC2 instances directly? (for python etc., I don't think we can change JDK for ion applications). Not sure if that'd work.
That said, if you're doing heavy compute through python libs etc. I'd imagine you wouldn't really want to run those nodes inside Ions anyway, as the overhead of maintaining query group cache, being part of the High Availability fall back group, etc. is probably not desirable. It would make sense just to spin up your custom compute heavy nodes inside the VPC and access Datomic as a client.#2021-03-0316:18thumbnailDatomic Analytics is really cool! Is it possible to expose created-at/updated-at like attributes on the tables?#2021-03-0316:20thumbnailIn our systems we use the transaction log to show them in the frontend. which is 👌:skin-tone-2:👌:skin-tone-2:#2021-03-0319:40Joe LaneWhat would either of those the times represent?#2021-03-0320:35thumbnailSome information about transactions on the entity. I.e. the most and least recent transaction instant would be most interesting for us#2021-03-0320:41Joe LaneWhat if one of those transaction were modifying an attribute that you're not returning from the query?#2021-03-0320:41Joe LaneDoes "lastmodified" even make sense at an entity level?#2021-03-0320:56thumbnailValid point. In our usecase we mainly need created at. We want to use it to see influx of users over time for example#2021-03-0321:29favilawhat does it mean to “create” an entity? it’s just a number#2021-03-0321:30favila“txInstant of first assertion on this entity id?”#2021-03-0321:39thumbnailYes exactly.#2021-03-0322:48favilabut you see how that’s not an obviously universal meaning of “created_at”? A specific domain may have a different one#2021-03-0406:37thumbnailYes, of course. But Im looking for a way to expose any transaction related information (in my case to expose some definition of created at). As it stands now there doesn't seem a way to expose transaction data#2021-03-0410:59thumbnailA possibility for us would be to expose multiple presto endpoints on different d/as-of . 
That'd also ensure a consistent view of the data.#2021-03-0414:31favilaHave you considered adding a “creating-tx” and “last-modifying-tx” attribute to your entities that references a transaction? That would be both faster (no history index needed to determine it) and exposable via presto
#2021-03-0414:32favilaand would still tie these times precisely to transactions and their txinstant#2021-03-0414:32Joe LaneThat's clever @U09R86PA4#2021-03-0414:34Joe LaneThose attributes could be applied generally too, no need to make :user/creating-tx .#2021-03-0415:12thumbnailThatd allow for indexing too. Sounds promising. I'll suggest it to the team. Just need to make sure to include that attribute in related transactions (but that could be generic too I recon)... Thanks!#2021-03-0419:29esp1I'm having trouble installing the ion-dev tools as per https://docs.datomic.com/cloud/operation/howto.html#ion-dev. It looks like it's not able to retrieve the ion maven artifacts. When I run clj in the root of the ion-starter project, I get:
➜ ion-starter git:(master) ✗ clj
Downloading: com/datomic/ion/0.9.50/ion-0.9.50.pom from datomic-cloud
Downloading: com/datomic/ion/0.9.50/ion-0.9.50.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.50 in central ()
My $HOME/.clojure/deps.edn looks like:
{
:aliases {
:ion-dev {:deps {com.datomic/ion-dev {:mvn/version "0.9.282"}}
:main-opts ["-m" "datomic.ion.dev"]}
}
:mvn/repos {
"datomic-cloud" {:url ""}
}
}
What am I doing wrong?#2021-03-0419:38mikejcusackhttps://docs.datomic.com/cloud/operation/howto.html#aws-access-keys#2021-03-0419:50Robert A. Randolphhttps://clojure.org/reference/deps_and_cli#_maven_s3_repos
This will provide further information for how to properly source credentials depending on which/where.#2021-03-0420:24mikejcusackCan you try clj -M:ion-dev from your actual project?#2021-03-0420:25esp1@U01NYKKE69G that gives me the same error..#2021-03-0420:26esp1My understanding from the instructions was that the only role I needed to add was the Datomic Administrator Policy, but when I look in that policy it doesn't provide access to the bucket. Should it?#2021-03-0420:26mikejcusackThe bucket isn't yours#2021-03-0420:27esp1That's true. I'm wondering how I'm supposed to get permission to read from it?#2021-03-0420:31mikejcusackIs that your full deps.edn or just a snippet?#2021-03-0420:31mikejcusackAnd you can aws s3 ls ?#2021-03-0420:32esp1that's the full deps.edn, with comments elided. i can run aws s3 ls and it shows me my own buckets.#2021-03-0420:34mikejcusackTry clj -M:ion-dev in a new repl. The error above isn't even for ion-dev, but ion.#2021-03-0420:35esp1I tried it in a blank directory w/no deps.edn and I get the same error#2021-03-0420:36mikejcusackRunning the command I provided?#2021-03-0420:36esp1well, not exactly the same. i get this:
WARNING: Use of :deps in aliases is deprecated - use :replace-deps instead
Downloading: com/datomic/ion-dev/0.9.282/ion-dev-0.9.282.pom from datomic-cloud
Downloading: com/datomic/ion-dev/0.9.282/ion-dev-0.9.282.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:ion-dev:jar:0.9.282 in central ()
#2021-03-0420:36esp1yes, clj -M:ion-dev#2021-03-0420:36mikejcusackSo that's not the same error, but similar#2021-03-0420:37mikejcusackWhich version of cli tools are you running?#2021-03-0420:37esp1➜ tmp git:(master) clj --version
Clojure CLI version 1.10.2.796
#2021-03-0420:37mikejcusack@U064X3EF3 Does this make any sense to you?#2021-03-0420:38mikejcusackCan you provide the full home deps.edn?#2021-03-0420:39esp1;; The deps.edn file describes the information needed to build a classpath.
;;
;; When using the `clojure` or `clj` script, there are several deps.edn files
;; that are combined:
;; - install-level
;; - user level (this file)
;; - project level (current directory when invoked)
;;
;; For all attributes other than :paths, these config files are merged left to right.
;; Only the last :paths is kept and others are dropped.
{
;; Paths
;; Directories in the current project to include in the classpath
;; :paths ["src"]
;; External dependencies
;; :deps {
;; org.clojure/clojure {:mvn/version "1.9.0"}
;; }
;; Aliases
;; resolve-deps aliases (-R) affect dependency resolution, options:
;; :extra-deps - specifies extra deps to add to :deps
;; :override-deps - specifies a coordinate to use instead of that in :deps
;; :default-deps - specifies a coordinate to use for a lib if one isn't found
;; make-classpath aliases (-C) affect the classpath generation, options:
;; :extra-paths - vector of additional paths to add to the classpath
;; :classpath-overrides - map of lib to path that overrides the result of resolving deps
;; :aliases {
;; :deps {:extra-deps {org.clojure/tools.deps.alpha {:mvn/version "0.5.442"}}}
;; :test {:extra-paths ["test"]}
;; }
:aliases {
:ion-dev {:deps {com.datomic/ion-dev {:mvn/version "0.9.282"}}
:main-opts ["-m" "datomic.ion.dev"]}
;; :new {:extra-deps {seancorfield/clj-new {:mvn/version "1.1.228"}}
;; :main-opts ["-m" "clj-new.create"]}
;; :rebl {:extra-deps {com.cognitect/rebl {:mvn/version "0.9.242"}
;; org.openjfx/javafx-fxml {:mvn/version "15-ea+6"}
;; org.openjfx/javafx-controls {:mvn/version "15-ea+6"}
;; org.openjfx/javafx-swing {:mvn/version "15-ea+6"}
;; org.openjfx/javafx-base {:mvn/version "15-ea+6"}
;; org.openjfx/javafx-web {:mvn/version "15-ea+6"}}
;; :main-opts ["-m" "cognitect.rebl"]}
;; :reveal {:extra-deps {vlaaad/reveal {:mvn/version "1.1.163"}}
;; :ns-default vlaaad.reveal
;; :main-opts ["-m" "vlaaad.reveal" "repl"]
;; :exec-fn repl}
}
;; Provider attributes
:mvn/repos {
;; "central" {:url ""}
;; "clojars" {:url ""}
;; "cognitect-dev-tools" {:url ""}
"datomic-cloud" {:url ""}
}
}#2021-03-0420:42mikejcusackTry renaming that file and try
{:aliases {:ion-dev {:extra-deps {com.datomic/ion-dev {:mvn/version "0.9.282"}}
:main-opts ["-m" "datomic.ion.dev"]}}
:mvn/repos {"datomic-cloud" {:url ""}}}#2021-03-0420:45esp1same error 😓#2021-03-0420:45mikejcusackDid you copy/paste that directly?#2021-03-0420:46esp1yes#2021-03-0420:46mikejcusackHmm, this isn't making sense at all to me#2021-03-0420:47esp1me either.. i really appreciate your taking the time to help tho!
#2021-03-0420:48mikejcusackDo you have any custom settings in .m2/?#2021-03-0420:48esp1is that s3 url supposed to be public? i.e. if i have any s3 access should i be able to do aws s3 ls #2021-03-0420:49mikejcusackIt should be. That's why the instructions are just to add the dep and repo#2021-03-0420:49esp1oh dang yes i do!#2021-03-0420:50mikejcusackThat command doesn't work for me#2021-03-0420:50esp1that was it! i had an old datomic-cloud key in my ~/.m2/settings.xml - i removed it and now it works!#2021-03-0420:50mikejcusackThere we go#2021-03-0420:51esp1thanks a ton @U01NYKKE69G - i was going crazy 😅
#2021-03-0420:51mikejcusackThe one for dev-tools is still needed if you use it
#2021-03-0419:34Alex Miller (Clojure team)It's not able to download from the Datomic s3 repo#2021-03-0419:35Alex Miller (Clojure team)You need aws creds active that have s3 permissions#2021-03-0420:15esp1Hm, I'm pretty sure I did this already. I added the Datomic Administrator Policy (`datomic-admin-taxman-us-east-2`) policy to the role I'm logged in with and can successfully run datomic client access taxman , but I can't access the maven artifacts with those same creds. Upon closer inspection it looks like the Datomic Administrator Policy doesn't actually provide access to the bucket. Is there another policy somewhere I need to add in order to enable this access?#2021-03-0423:11uwoI'm seeing this warning with an exception when calling (d/shutdown true) at the end of a java process's life. It appears to be benign, but I'm curious if there's anything I can fix to prevent it:#2021-03-0423:11uwo#2021-03-0423:14uwoWe're on com.datomic/datomic-pro "0.9.6024" at the moment.#2021-03-0423:47uwoOof -- pretty obvious. I needed to handle shutting down my clojure resources separately. So (d/shutdown false) no longer results in the warning.#2021-03-0815:45danmDoes Datomic have any sort of known issues around creating a number (~300-400) of connections in a very short space of time? We've got an app that on startup spawns 30-40 threads, each of which is creating connections to pull data from 10 separate databases (the databases are unique per-thread, so only 1 connection per db, but on the same Datomic cluster). We frequently get a load of category interrupted 'Datomic Client Timeout' exceptions on startup, and have to delete/recreate the container, even though the AWS metrics (Datomic Cloud production setup) don't show any particular issues with mem, CPU, etc.
Once the app has started it seems to be fine and stable, no timeouts (with transact and q calls calling d/connect each time they run), unless we perform an action that's going to require it to run through and recreate a lot of those connections rapidly again.#2021-03-0815:54ghadiI would put exponential retries with some jitter
#2021-03-0815:54ghadithe exception you receive should be marked with a :cognitect.anomalies/category that should indicate if it's retriable#2021-03-0815:55danmYeah, it's interrupted (namespaced of course)#2021-03-0815:55ghadiyou don't want to destroy a container just because 1/400 connections fail#2021-03-0815:55kennyIt’s likely you’re getting throttled ops. #2021-03-0815:55kenny(Can check in CW dashboard) #2021-03-0815:56danmI was going to do some work to add that, but there has been a bit of pushback from some in the team because recommendations/docs from Cognitect elsewhere recommend a retry on unavailable, but don't mention other categories#2021-03-0815:56danmSo having feedback that it would be a good idea is 👍:skin-tone-2:#2021-03-0815:56ghadiinterrupted, busy and unavailable are the 3 retriable anomalies#2021-03-0815:57ghadihttps://github.com/cognitect-labs/anomalies#the-categories#2021-03-0815:58danm@U083D6HK9 You mean some Datomic internal throttling, or on the DynamoDB? We did used to see a bit of DDB throttling, so we changed from provisioned resource to on demand scaling (basically, no scaling needed but pay per-request), and don't see them any more. Once we have a better idea of longer-term access patterns we'll probably change that back#2021-03-0816:00ghadi@U6SUWNB9N cloud or onprem?#2021-03-0816:04danmCloud#2021-03-0816:06danmWe are going between VPCs though, as the CloudFormation for Datomic cloud sets up its own VPC rather than being able to 'join' an existing one, and we already had an existing one with EKS in etc. We're not currently finding any limits being hit r.e. inter-VPC comms though#2021-03-0816:06ghadiThe cloudwatch dashboard should show Throttled Ops#2021-03-0816:06ghadi(Dashboard for Datomic)#2021-03-0816:07ghadiThis is separate than ddb throttling, but could be caused by ddb throttling#2021-03-0816:07kennyAlso curious if you're pointing all 300-400 to the primary compute group.#2021-03-0816:08danmOh yes, with you. 
Nothing in the dashboard. Occasional OpsTimeout there too, but no OpsThrottled#2021-03-0816:13danm@U083D6HK9 At the moment, yes. We've not deployed any query groups (right terminology? I'm pretty new to Datomic), so the only instances running in the cluster are the 2x i3.large ones that are part of the standard template.
Our access pattern involves a fair bit of writing. In some cases we're 1:1 read:write. There is a small lean towards q requests on startup as it loads initial state, but that is only maybe 10% above the transact requests, so I wasn't sure that query groups would help.#2021-03-0816:18ghadiplan for exponential retry/backoff on transact, connect, q#2021-03-0816:20danm👍:skin-tone-2: Our next challenge we already know is "how do we make this faster?", but that's a good start. Thank you. And we'll have metrics to know when we do retry#2021-03-0816:44ghadilook into Query Groups to isolate read load
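ghadi's advice above (retry only the retriable anomaly categories, with exponential backoff and jitter) might be sketched like this. `with-retry` and its parameters are illustrative names, not a Datomic API:

```clojure
;; Hypothetical retry helper for Datomic client calls: retry only the
;; retriable anomaly categories, with exponential backoff plus random jitter.
(def retriable?
  #{:cognitect.anomalies/interrupted
    :cognitect.anomalies/busy
    :cognitect.anomalies/unavailable})

(defn with-retry
  "Calls (op). If it throws an ex-info whose anomaly category is retriable,
  sleeps with exponential backoff + jitter and tries again, up to max-tries."
  [op & {:keys [max-tries base-ms] :or {max-tries 5 base-ms 100}}]
  (loop [attempt 1]
    (let [result (try
                   {:ok (op)}
                   (catch Exception e
                     (if (and (< attempt max-tries)
                              (retriable? (:cognitect.anomalies/category (ex-data e))))
                       {:retry e}
                       (throw e))))]
      (if (contains? result :ok)
        (:ok result)
        (do (Thread/sleep (long (+ (* base-ms (bit-shift-left 1 attempt))
                                   (rand-int base-ms))))
            (recur (inc attempt)))))))

;; usage sketch: (with-retry #(d/transact conn {:tx-data tx-data}))
```

The same wrapper would apply to `d/connect` and `d/q` calls, per the advice above.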
#2021-03-0816:44ghadican scale those independently of the primary compute group#2021-03-0816:46kennyYou can consider pre-scaling a query group prior to app deploy. #2021-03-0908:47heliosHey folks, I want your opinion. I need to put in my datomic schema an ordered list of refs to other entities. Since datomic doesn't support that natively, how would you do that? The difficult part is probably how to make sure the ordering stays consistent. Are composite tuples a good way of modeling that? :thinking_face:#2021-03-0908:49heliosWhat I had in mind was something like suggested here: https://forum.datomic.com/t/handling-ordered-lists/305/5#2021-03-0918:33Braden ShepherdsonI think that discussion is pretty sound. you end up with a "table" of [order, ref] pairs. there's not really a better way to do it and keep it queryable. abusing tuples for a dynamically-sized list is not likely to end well.#2021-03-0909:08thumbnailHey! Our team is using Datomic Analytics for some dashboards. One metric is based on the depth of trees in our system. I noticed the query sometimes breaks, I think in datomic-land.
The result of the query is:
java.sql.SQLException: Query failed (#20210309_090647_00001_tab9g): bytes is negative
#2021-03-0909:09thumbnailWITH RECURSIVE t(lvl, parent, db__id) AS (
SELECT 1, parent, db__id
FROM mydb.node
WHERE parent IS NULL
UNION ALL
SELECT lvl + 1, t.parent, t.db__id
FROM mydb.node
JOIN t
ON mydb.node.parent = t.db__id
)
SELECT * FROM t LIMIT 1;
This is the query#2021-03-0909:09thumbnailThe node table is simply db__id, parent, and name. where parent refers to another node (or nil)#2021-03-0909:10thumbnailIt sometimes happens. So running the query multiple times yields different results. We're currently on datomic 1.0.6222.#2021-03-0916:36Joe Lane@UHJH8MG6S how consistently can you reproduce?
Once you can consistently, can you upgrade to the latest release and then try to repro again?#2021-03-0916:55thumbnailI could not repro consistently. But I noticed an error in my query. It's recurring until OOM. Sometimes returning "out of resource" (expected). Sometimes returning the bytes-error.
I'll upgrade the cluster later this week anyway and see if I can reproduce. #2021-03-0917:01Joe LaneWhat is the size of your analytics gateway and is it pointed at a query group?#2021-03-0917:24thumbnailI'm using on prem, right now it's a test setup with 10GB ram per query and 16GB max object heap. As a reference, when I fixed the query it consumed 1.5GB; the entire datomic DB is under a gig#2021-03-0917:26Joe LaneAhh, I was thinking you were on cloud.#2021-03-0910:18thumbnailSpotted another oddity; binding the same attribute multiple times doesn't work (in datomic analytics):
java.sql.SQLException: Query failed (#20210309_101720_00072_tab9g): Symbol name already has assignment "name_20", while adding "name_24"
#2021-03-0910:19thumbnailWITH RECURSIVE t(lvl, parent, db__id, name, root_name) AS (
SELECT 1, r.parent, r.db__id, r.name, r.name
FROM mydb.node AS r
WHERE r.parent IS NULL
UNION ALL
SELECT lvl + 1, c.parent, c.db__id, c.name, root_name
FROM mydb.node as c
JOIN t
ON c.parent = t.db__id
)
SELECT * FROM t ORDER BY t.lvl
Given this query, the two name bindings are needed so that both the leaf and the root names are shown#2021-03-0916:37Joe LaneThis ball is entirely in prestosql's court. #2021-03-0916:52thumbnailFair enough! It was hard to judge this one. I simply wrapped the value in an array, which did the trick.#2021-03-0910:47danmhttps://docs.datomic.com/cloud/client/client-api.html#timeouts says that timeouts are normally returned as ::anom/unavailable, but we seem to get a lot of ::anom/interrupted exceptions where the text content is Datomic Client Timeout. Is there some bad way of interacting we're likely to be doing that would cause us to get interrupted rather than unavailable?#2021-03-0922:09ghadiBoth are retriable...#2021-03-0911:12pmooserIf I have a to-many ref attribute, is there an easy way to query for that being empty?#2021-03-0911:17pmooserI suppose maybe missing? is the easiest way.
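For reference, the `missing?` approach pmooser mentions looks roughly like this; the `:person/*` attributes are made up for illustration:

```clojure
;; Hypothetical schema: :person/friends is a cardinality-many ref.
;; missing? binds entities that have NO value for the given attribute,
;; which is how you find an "empty" to-many ref.
(def no-friends-q
  '[:find ?e ?name
    :where
    [?e :person/name ?name]
    [(missing? $ ?e :person/friends)]])
```

Run it as `(d/q no-friends-q db)`.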
#2021-03-0911:27pmooserI'm not quite sure how to use missing? in an or clause, though, because of the unification requirement. Hmm.#2021-03-0921:42jarethttps://forum.datomic.com/t/datomic-1-0-6269-now-available/1798
#2021-03-1004:19emGetting a 404 on https://docs.datomic.com/release-notices.html#2021-03-1014:20jaretThanks! I forgot to update that link to reflect the new doc org
#2021-03-0922:39Ben HammondHi. I'm trying to follow the Datomic Ions tutorial, but am stuck on the https://docs.datomic.com/cloud/ions/ions-tutorial.html#configure-connection
I'm not sure how to determine the :endpoint :
I have hacked up
:endpoint ""
to look more like my system, but I assume I must have got it wrong
Can I find the correct endpoint in the CloudFormation logs?#2021-03-0922:41Ben Hammonddatomic -r eu-west-1 client access
seems to have started the SSH tunnel without error#2021-03-0922:43Ben HammondAh found it.
It was in the Outputs of the Compute Stack
#2021-03-0923:03Ben Hammondhmmm how do I specify an aws region in
clojure -A:ion-dev '{:op :push}'#2021-03-0923:05Ben Hammond{:command-failed "{:op :push}",
:causes
({:message
"Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.",
:class SdkClientException})}#2021-03-0923:12Ben Hammonda somewhat low-tech
export AWS_REGION=eu-west-1
does the trick... sure there must be a nicer way though#2021-03-0923:14kenny@U0CCKNZHT See push docs: https://docs.datomic.com/cloud/ions/ions-reference.html#push 🙂#2021-03-0923:14kennyAdd :region "my-region-1"
#2021-03-0923:19Ben Hammondthis seems an unfortunate error message upon the deploy
{:command-failed
"{:op :deploy, :group vorpal-ion-starter-Compute-A2UL4PZOZHGU, :uname \"benjy-1\"}",
:causes
({:message
"A Lambda function name, which is the concatenation of the query group and the function name specified in the ion-config.edn file must be <= 64 characters.\nThe following names are too long for Lambda: vorpal-ion-starter-Compute-A2UL4PZOZHGU-get-items-by-type-lambda-proxy",
:class RuntimeException})}#2021-03-0923:20Ben HammondI have control of the first 18 chars of that compute group name but not the last 20#2021-03-0923:21kennyYeah: https://docs.datomic.com/cloud/ions/ions-reference.html#lambda-config. I've hit this before too 😞#2021-03-0923:23kennyPerhaps they should add a warning note to the tutorial on this point.#2021-03-0923:23Ben Hammondso the CodeDeployApplicationName should be no more than 11 characters long#2021-03-0923:23Ben Hammonddo I have to tear the whole stack down and recreate it?#2021-03-0923:24Ben Hammondthat really ought to be in the tutorial if so#2021-03-0923:25kennyI don't believe you can change the name without recreating it.#2021-03-0923:25kennyIn https://docs.datomic.com/cloud/operation/planning.html#naming, they recommend keeping names under 24 characters.#2021-03-0923:26Ben Hammondhttps://docs.datomic.com/cloud/getting-started/start-system.html#details#2021-03-0923:27Ben Hammondyeah just saw that.
Ah well, that's a sign to pack it in for the evening#2021-03-0923:27Ben Hammondthanks for your help#2021-03-0923:28kennySure thing. Sorry the result isn't ideal 😞#2021-03-0923:28Ben Hammondit's all about the journey#2021-03-1013:57jaret@U793EL04V sorry about the frustrating start. Our hands are a bit tied on the master stack creation adding UUIDs for the nested stacks. We need to overhaul the tutorial as well, but I would like to make a recommendation for your next stack. Now that you are subscribed from Marketplace start your new system with split stacks. (launch storage and then compute). You can get the templates from our https://docs.datomic.com/cloud/releases.html#current. And you can follow the split stack instructions here: https://docs.datomic.com/cloud/operation/split-stacks.html#howto
#2021-03-1017:17Ben Hammondok will try that.
thanks#2021-03-1321:14Ben HammondHi @U1QJACBUM.
I followed the split stack instructions, and can access datomic from the REPL.
However I am not able to successfully deploy; the deploy status returns as
{:deploy-status "FAILED", :code-deploy-status "FAILED"}
When I look into the CodeDeploy events I see that it is complaining about DownloadBundle:
Error code: UnknownError
Script name: (empty)
Message: Access Denied
I can successfully download the pushed zip file from S3...#2021-03-1321:14Ben Hammondam wondering if this is some side-effect of the split-stack deletion/recreations?#2021-03-1321:15Ben HammondI found https://stackoverflow.com/questions/54342398/how-to-troubleshoot-access-denied-in-code-deploy-for-downloadbundle-stage, just trying to figure how that relates...
if it relates#2021-03-1321:32Ben Hammondoh, the datomic-code-eu-west-1 was permitting ListCodeBucket/ReadCodeBucket to a previous s3 bucket.
that's strange#2021-03-1321:32Ben Hammondwell, updated it and looks like it's working#2021-03-1001:40mafcocincoHi. Looking for some guidance on the proper way to use transactions. I'm hoping I can store a current transaction id as a way of maintaining the current version of a production db. I want to allow users of my app to make potential changes to the system against the current transaction id and then, via some "publishing process" (as yet to be defined), allow for the current transaction id to be set to the transaction id of the promoted changes. First, is this a good idea or maybe not? Second, assuming it is, what happens if multiple users create different transactions against the same transaction id and wish to commit them at the same time? I would like to read more about how facts are associated with transaction ids and how Datomic avoids collisions while only committing the facts associated with the specified transaction.#2021-03-1004:24emI'm not sure reifying all those potential changes into the actual transaction history of your shared production db makes a lot of sense, especially as you seem to be implying that these "temporary" transaction batches could change or not be "committed". The problem you are trying to solve is also a little unclear, so the solutions could be quite varied, but for starters what about just doing all "local" changes and decision making in-memory for users using with-db? You get the benefit of seeing what might happen using the full dataset, but only commit when you are satisfied/need to.#2021-03-1004:55mafcocincoYes, that was my intention. Then, when it is decided that the changes are good, they can be committed by replaying the transaction outside of the with-db context?#2021-03-1004:56mafcocincoThat is part of what I'm trying to solve. The other is when the production system actually uses the new changes, ie when it is told to refresh the transaction id that it is using.#2021-03-1012:24thumbnailHi! 
While using Datomic Analytics (on prem 1.0.6222) we noticed :db.type/instant works as expected for attributes, except :db/txInstant, which yields an error:
Could not serialize column 'txinstant' of type 'timestamp(3)' at position 1:2
#2021-03-1012:24thumbnailcontext: https://clojurians.slack.com/archives/C03RZMDSH/p1614788315071400#2021-03-1013:46jaret@UHJH8MG6S This is fixed in the latest release. You need to also use an updated presto CLI.
#2021-03-1013:47thumbnailThat's awesome! I'm looking to upgrade soon either way. Will report soon.#2021-03-1013:47jaretThe Presto CLI we had to settle on is 348#2021-03-1013:47jaretIt's just before they changed their name to Trino.#2021-03-1013:48jaretDoc'd here https://docs.datomic.com/on-prem/analytics/analytics-cli.html with the link to the proper CLI#2021-03-1013:48jaretThey changed the timestamp type out from under us 😞. Sorry about the trouble.#2021-03-1013:50thumbnailThat should work, Thanks!
> They changed the timestamp type out from under us
😅#2021-03-1220:45thumbnailHappy to report a upgrade fixed these problems. Thanks!#2021-03-1112:31donavanHi, I’m trying to setup a dev-local test fixture. In the fixture I create an in memory db but when I run dev-local/release-db on finally it returns nil and when the next test runs the db still has the previous tests data in it. I’m passing the same :db-name and :system to both the creation and release steps. Am I misunderstanding what release-db does?#2021-03-1112:31donavanI couldn’t find any API docs for dev-local only methods#2021-03-1112:33donavanI can gensym the db-name but I’d rather the memory be released#2021-03-1112:46donavanI’ve just gone with delete-database for now (what is the purpose of release-db then?)#2021-03-1112:49favilaReleasedb releases in-process resources related to the connection#2021-03-1112:50favilaYou would use it in a repl to reset your state and free memory and threads without deleting data; you probably wouldn’t ever use it in a production application. #2021-03-1113:14donavanAh ok, thanks!#2021-03-1120:46raspasovDo you guys know of any graphical UI for browsing Datomic data? (apart from Datomic Console)#2021-03-1120:47raspasovI’m thinking something in the style of Postico https://eggerapps.at/postico/ (for Postgres) or Sequel Pro (http://sequelpro.com) for MySQL#2021-03-1307:22JBHomebase + #datahike are working on one here https://github.com/homebaseio/datalog-console{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2021-03-1120:48Alex Miller (Clojure team)You can use REBL{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2021-03-1120:54raspasovGoing to explore REBL, thanks!#2021-03-1120:49Alex Miller (Clojure team)Or if you use Datomic Analytics, lots of sql tools can work with it#2021-03-1201:12kennyDo the datoms in a transaction from a call to d/tx-range have any particular order?#2021-03-1201:45ghadiI don’t think so#2021-03-1201:48ghadiI was writing a transaction splitter for a decanting routine, and I remember explicitly sorting a tx by retractions first, then assertions{:tag :div, :attrs {:class "message-reaction", :title "heavy_plus_sign"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("➕")} " 3")}
#2021-03-1216:51zendevil.ethI’m running the following command:
./datomic client access my-stack-name
But I’m getting the following error:#2021-03-1216:51zendevil.ethExecution error (ExceptionInfo) at datomic.tools.ops.aws/invoke! (aws.clj:83).
AWS Error: Unable to fetch region.
#2021-03-1216:51zendevil.ethHow to fix this?#2021-03-1217:21jaretHi @ps do you have AWS credentials sourced? Datomic utilizes the sourced or specified https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html to operate. The CLI tools allow you to also pass a aws profile https://docs.datomic.com/cloud/operation/cli-tools.html#gen-opts#2021-03-1219:45joshkhto test the performance of a query, i'd like to run it a few times while avoiding any caching effects. is it just a matter of using unique binding names in each execution? https://docs.datomic.com/cloud/query/query-executing.html#query-caching#2021-03-1219:57Joe Lane@joshkh It sounds like you're attempting to test the performance of the query-as-edn->datalog-engine-format calculation. Are you suspecting that is a problem for you?#2021-03-1220:05joshkhthanks for the reply. to be completely transparent i don't know much about the various layers of caching. the problem i'm trying to solve is that i have a handful of queries that perform badly, and better on subsequent executions. while debugging i don't know if my reordering of the constraints is actually helping or if caching is just doing its job.#2021-03-1220:14Joe LaneI think it is highly unlikely that query-caching is what you're seeing. More likely, the data is not in the object-cache / valcache / memcached on the first run and have to be fetched from storage. Upon subsequent runs, the queries have the data in all of those caching layers and therefore is mostly CPU bound (what you want) instead of io bound (not what you want).#2021-03-1220:46joshkhyup, that makes sense. thanks for clarifying. i had a feeling it wasn't as simple as caching query results based on how the query is compiled.#2021-03-1306:29zendevil.ethI trying the following command:
./datomic client access humboi-march-2021 -p <aws-email> <password>
#2021-03-1306:30zendevil.ethBut it isn’t working#2021-03-1306:30zendevil.ethHow do I pass the credentials correctly?#2021-03-1321:34Ben Hammondhow do I delete a previously pushed datomic ion?
so I ran
clojure -A:ion-dev '{:op :push :region "eu-west-1" :uname "ben-test1"}'
how would I subsequently remove that from the S3 bucket?#2021-03-1400:53Joe Lane@ben.hammond If it’s a uname you can just push another commit to overwrite it. {:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2021-03-1406:58zendevil.ethI tried the following code to create a datomic database and connect to it:
(let [cfg (-> env :datomic-cfg)
client (d/client cfg)]
(do
(d/create-database
client
{:db-name "humboi-march-2021"})
(d/connect client {:db-name "humboi-march-2021"})))
However, I’m getting the following error:
Execution error (ExceptionInfo) at (pro.clj:72).
Invalid connection config: {:server-type :peer-server, :access-key "key-0680cb34675d5fd59", :secret "<ELIDED>", :endpoint "", :validate-hostnames false}
How to fix this?#2021-03-1420:54mikejcusackThat's not a valid access key#2021-03-1407:05zendevil.ethAlso, is the secret the same key that we have in the pem file downloaded when we create the aws key? My secret looks like this:
-----BEGIN RSA PRIVATE KEY-----
utv686t3q48by7q03gr7y...
-----END RSA PRIVATE KEY-----
#2021-03-1410:17jamesmintramHey - so this has probably been asked before - but I cannot find a decent answer. How would someone approach a multi-tenant system with Datomic?#2021-03-1410:17jamesmintramIs there a way to make it fairly safe against errors, for example, adding a tenant attribute to every entity sounds nice and easy - but how would you enforce that every query filters on that key?#2021-03-1410:18jamesmintramie some sort of way of automatically adding that filter to every query/pull.#2021-03-1410:18jamesmintramor is there a different/more idiomatic way of doing this?#2021-03-1413:44Alex Miller (Clojure team)Make separate databases?#2021-03-1414:17jamesmintramWould that scale for a SaaS like application? I know in the Postgres world there is a choice between a schema per tenant vs a shared schema + tenant keys.
I wondered what that looked like with Datomic#2021-03-1414:17jamesmintramSo in the small scale (10s 100s?) or clients - I guess 1 DB per client would work. Would that scale to 1000s with Datomic?#2021-03-1414:44Alex Miller (Clojure team)Probably beyond my ability to answer, certainly people are doing this at the 10s or 100s scale. Maybe ask at https://ask.datomic.com ?#2021-03-1414:53jamesmintramOk, thanks!#2021-03-1414:12Joe Lane@ps are you trying to use datomic cloud? It looks like are using the non-cloud datomic pro client.
#2021-03-1414:12zendevil.eth@lanejo01 yes I’m trying to use datomic cloud#2021-03-1414:13Joe LaneOk. You definitely have the wrong client dependency.
How far have you gotten through the cloud setup docs?#2021-03-1414:15zendevil.eth@lanejo01 I’m not sure what cloud setup docs you’re referring to#2021-03-1414:20zendevil.ethI’m looking at this currently:
https://docs.datomic.com/cloud/getting-started/get-connected.html#2021-03-1414:21zendevil.eththis has the same require as I have:
https://docs.datomic.com/cloud/tutorial/client.html#2021-03-1414:21zendevil.eth(require '[datomic.client.api :as d])
#2021-03-1414:22zendevil.ethActually, I have two datomic dependencies:
[com.datomic/client-cloud "0.8.105"]
[com.datomic/client-pro "0.9.63"]
#2021-03-1414:22Joe LaneGet rid of client-pro#2021-03-1414:23zendevil.eth@lanejo01 Now I’m getting the following error:
Execution error (FileNotFoundException) at datomic.client.api.impl/serialized-require* (impl.clj:16).
Could not locate datomic/client/impl/pro__init.class, datomic/client/impl/pro.clj or datomic/client/impl/pro.cljc on classpath.
user>
#2021-03-1414:25Joe LaneYou’re using lein?
Are you using another dep that depends on client-pro? Can you show all your deps?
#2021-03-1414:26zendevil.ethYes I’m using lein with cider.
:dependencies [[ch.qos.logback/logback-classic "1.2.3"]
[cheshire "5.10.0"]
[clojure.java-time "0.3.2"]
[com.google.guava/guava "27.0.1-jre"]
[com.novemberain/monger "3.1.0" :exclusions [com.google.guava/guava]]
[cprop "0.1.17"]
[expound "0.8.7"]
[funcool/struct "1.4.0"]
[luminus-aleph "0.1.6"]
[luminus-transit "0.1.2"]
[luminus/ring-ttl-session "0.3.3"]
[markdown-clj "1.10.5"]
[metosin/muuntaja "0.6.7"]
[metosin/reitit "0.5.10"]
[metosin/ring-http-response "0.9.1"]
[mount "0.1.16"]
[nrepl "0.8.3"]
[org.clojure/clojure "1.10.1"]
[org.clojure/tools.cli "1.0.194"]
[org.clojure/tools.logging "1.1.0"]
[org.webjars.npm/bulma "0.9.1"]
[org.webjars.npm/material-icons "0.3.1"]
[org.webjars/webjars-locator "0.40"]
[ring-webjars "0.2.0"]
[ring/ring-core "1.8.2"]
[ring/ring-defaults "0.3.2"]
[amazonica "0.3.153"]
[selmer "1.12.31"]
[com.datomic/client-cloud "0.8.105"]
;;[io.rkn/conformity "0.5.4"]
]#2021-03-1414:30Joe LaneI'm... perplexed here. Are you starting a new project or migrating an existing one?
FWIW, conformity depends on client-pro and does not work with client-cloud.#2021-03-1414:33zendevil.ethI have been working on this project for a while, but want to switch dbs, so added the datomic dependency#2021-03-1414:34zendevil.ethI suppose that means migrating?#2021-03-1414:34Joe LaneYup!#2021-03-1414:35zendevil.ethI’m using #luminus and made some changes to it. It asked for a database string, but I used a map instead according to the docs#2021-03-1414:35zendevil.eth(defstate conn
:start (let [cfg (-> env :datomic-cfg)
client (d/client cfg)]
(do
(d/create-database
client
{:db-name "humboi-march-2021"})
(d/connect client {:db-name "humboi-march-2021"})))
:stop (-> conn .release))
#2021-03-1414:35zendevil.ethwhen I start the server (start) I see the error#2021-03-1414:37Joe LaneWhat is the :datomic-cfg in your environment?#2021-03-1414:38zendevil.eth{:server-type :peer-server
:access-key "key-0680cb34675d5fd59"
:secret "-----BEGIN RSA PRIVATE KEY-----
oashfouhnHIOUNHFIOUHNFOIHU...
-----END RSA PRIVATE KEY-----"
:endpoint ""
:validate-hostnames false
}#2021-03-1414:41Joe LaneGreat, we've found our first problem!
You're not running a peer when you use client-cloud, therefore that configuration map should be changed.
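For contrast with the peer-server map above, a Datomic Cloud client config is shaped roughly like this sketch. All values are placeholders; the real endpoint comes from the compute stack's CloudFormation outputs per the Get Connected doc:

```clojure
;; Sketch only -- placeholder values, not this system's real settings.
(def cloud-cfg
  {:server-type :ion
   :region      "us-east-1"          ;; your AWS region
   :system      "humboi-march-2021"  ;; your Datomic Cloud system name
   :endpoint    "http://entry.humboi-march-2021.us-east-1.datomic.net:8182/"
   ;; :creds-profile "my-profile"    ;; optional named AWS profile
   })
```

Note there is no :access-key/:secret pair here; Cloud clients authenticate with AWS credentials.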
Before we do that though let's back up a bit.#2021-03-1414:43Joe LaneYou sent me https://docs.datomic.com/cloud/getting-started/get-connected.html, have you already gone through "Start a system" and "Configure Access"?
If so, can you now connect to your access gateway?#2021-03-1414:44zendevil.ethI have started a system but not configured access#2021-03-1414:47Joe LaneOk, you should do that now and, in general, follow these steps in order https://docs.datomic.com/cloud/getting-started/getting-started.html#2021-03-1414:48zendevil.ethI have already done “Allow Inbound Traffic to the Access Gateway”#2021-03-1414:48Joe LaneI saw that yesterday you hit https://clojurians.slack.com/archives/C03RZMDSH/p1615616996195200#2021-03-1414:49zendevil.ethbut is authorizing datomic users necessary? Does one have to authorize oneself?#2021-03-1414:49Joe LaneYes, you need to authorize yourself#2021-03-1414:52zendevil.ethI created a new group and added this policy:
datomic-admin-humboi-march-2021-us-east-1#2021-03-1414:53zendevil.ethbut when I click on “add users to the group”, it’s empty. “No records found”#2021-03-1414:54Joe LaneRefresh the page?#2021-03-1414:55zendevil.ethI tried refreshing. Still nothing. I don’t have any iam users associated with the account. It’s just the root. I think that’s why it doesn’t show any users.#2021-03-1414:57zendevil.ethso I created a user and added it to the group#2021-03-1414:58zendevil.ethso now I have both:
• https://docs.datomic.com/cloud/getting-started/start-system.html
• https://docs.datomic.com/cloud/getting-started/configure-access.html
#2021-03-1415:00zendevil.ethWhen I run:
./datomic client access humboi-march-2021
I get:
Execution error (ExceptionInfo) at datomic.tools.ops.aws/invoke! (aws.clj:83).
AWS Error: Unable to fetch region.
#2021-03-1415:00Joe LaneGreat!#2021-03-1415:02Joe LaneWhat does running aws --version at the terminal return?#2021-03-1415:02zendevil.ethaws-cli/2.1.30 Python/3.8.8 Darwin/20.2.0 exe/x86_64 prompt/off
#2021-03-1415:03Joe LaneAnd aws sts get-caller-identity#2021-03-1415:04zendevil.ethokay I had to run aws configure and add the credentials#2021-03-1415:05zendevil.ethNow when I run
./datomic client access humboi-march-2021
#2021-03-1415:05Joe LaneDid you add the credentials for the NEW user you just created?#2021-03-1415:05zendevil.ethI get
Execution error at datomic.tools.ops.aws/get-bucket (aws.clj:110).
Error finding S3 bucket for humboi-march-2021
#2021-03-1415:05zendevil.ethyes added the credentials of the newly created user#2021-03-1415:07Joe LaneDid you add the new user as a named profile under ~/.aws/config?#2021-03-1415:07zendevil.ethbtw,
aws sts get-caller-identity
{
"UserId": "AIDASILWYFNXUHRANK2XQ",
"Account": "155404741487",
"Arn": "arn:aws:iam::155404741487:user/prit"
}#2021-03-1415:08Joe LaneIs prit the name of the root user or the NEW user you just created?#2021-03-1415:08zendevil.ethyes#2021-03-1415:08zendevil.eththe new user#2021-03-1415:09Joe LaneAnd what is the named profile for that new user?#2021-03-1415:11zendevil.ethI don’t know#2021-03-1415:12Joe LaneOk, what are the contents of your ~/.aws/config ?
DO NOT SHARE ~/.aws/credentials!!!#2021-03-1415:13zendevil.eth[default]
region = us-west-1
#2021-03-1415:15Joe LaneLook inside ~/.aws/credentials and you should see similar [bracketed] entries. Do not share the credentials of those entries, but can you type out the [bracketed] profile names?#2021-03-1415:15zendevil.eththere’s only one:
[default]
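[Editor's note: for reference, named profiles live in two files, and the section header syntax differs between them; `~/.aws/config` uses a `profile ` prefix for non-default entries. A sketch with placeholder keys (the `prit` profile name and region come from this thread):]

```ini
# ~/.aws/credentials  (secrets; never share this file)
[default]
aws_access_key_id = AKIA...placeholder...
aws_secret_access_key = ...placeholder...

[prit]
aws_access_key_id = AKIA...placeholder...
aws_secret_access_key = ...placeholder...

# ~/.aws/config  (note the "profile " prefix for non-default entries)
[default]
region = us-east-1

[profile prit]
region = us-east-1
```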
#2021-03-1415:15zendevil.ethAnd I think it’s the root#2021-03-1415:25Joe LaneOk. Here is what I think happened:
1. you were root with admin privileges.
2. You created a new user named prit in the iam console.
3. You generated CLI access and secret keys for prit. <-- Whether you did this or not is important for the next section.
4. You set up your AWS CLI for the first time, adding the original root credentials under the default profile.
I think you should:
1. Create a new entry in your ~/.aws/credentials for [prit] with the credentials created in step 3 above.
2. Create a matching entry in your ~/.aws/config for [profile prit] with region = us-west-1. I'm assuming here that your datomic cloud system was created in us-west-1, if not let me know.
3. Run ./datomic client access humboi-march-2021 -p prit -r us-west-1 and let me know the output.#2021-03-1415:28zendevil.ethit was created in us-east-1#2021-03-1415:31Joe Laneif you plan to do all of your aws work in us-east-1 you will want to set the region for the prit profile (and likely the default profile) to us-east-1.#2021-03-1415:31Joe LaneChange ./datomic client access humboi-march-2021 -p prit -r us-west-1 to ./datomic client access humboi-march-2021 -p prit -r us-east-1#2021-03-1415:31zendevil.ethokay, I did that. And actually now upon double checking the [default] is already on the newly created iam user#2021-03-1415:31zendevil.eththere was no root#2021-03-1415:31Joe LaneGreat 👍#2021-03-1415:32zendevil.ethSo I changed the region to us-east-1 in config#2021-03-1415:33zendevil.ethbut upon running the command without the -p and -r tags, I get:
Execution error (ExceptionInfo) at datomic.tools.ops.aws/invoke! (aws.clj:83).
AWS Error: Unable to fetch region.
Full report at:
/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/clojure-8861423583993777330.edn#2021-03-1415:35zendevil.ethwhen I use the -r tag without the -p tag, I get “unable to fetch credentials”#2021-03-1415:35zendevil.ethit seems like the .aws files are being ignored by the aws cli#2021-03-1415:48Joe LaneIf you run env | grep AWS (DO NOT SHARE) do you have values set for AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY?#2021-03-1415:49zendevil.ethenv | grep AWS
returns nothing#2021-03-1415:55zendevil.ethI think you meant
printenv | grep AWS
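[Editor's note: a shell detail that matters for the inline-assignment command that follows: `VAR=x; cmd` sets a plain shell variable that is not exported to `cmd`, while `VAR=x cmd` (no semicolon) places it in that one command's environment. A minimal demo:]

```shell
unset FOO
# With a semicolon, FOO is a plain shell variable: the child doesn't see it.
FOO=bar; sh -c 'echo "semicolon: ${FOO:-unset}"'   # prints "semicolon: unset"
# As a prefix, FOO is exported into just this child's environment.
FOO=bar sh -c 'echo "prefix: ${FOO:-unset}"'       # prints "prefix: bar"
```

So `AWS_PROFILE=prit aws sts get-caller-identity`, all on one line with no semicolons, is the reliable one-shot form; with semicolons the CLI silently falls back to the default credential chain.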
#2021-03-1415:55zendevil.ethwhich returns nothing#2021-03-1415:56Joe LaneWhat does AWS_PROFILE=prit; AWS_REGION=us-east-1; aws sts get-caller-identity return?#2021-03-1415:59zendevil.eth{
"UserId": "AIDASILWYFNXUHRANK2XQ",
"Account": "155404741487",
"Arn": "arn:aws:iam::155404741487:user/prit"
}#2021-03-1415:59Joe LaneWait, when you said
> but upon running the command without the -p and -r tags, I get:
> Execution error (ExceptionInfo) at datomic.tools.ops.aws/invoke! (aws.clj:83).
> AWS Error: Unable to fetch region.
You said without, and then you said
> when I use the -r tag without the -p tag, I get “unable to fetch credentials”
Did you actually run the command WITH -p and -r?#2021-03-1416:00zendevil.eth@lanejo01 this seems to work:
datomic-cli % ./datomic client access humboi-march-2021 -p default -r us-east-1
#2021-03-1416:01zendevil.ethI get:
OpenSSH_8.1p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/prikshetsharma/.ssh/config
debug1: /Users/prikshetsharma/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 47: Applying options for *
debug1: Connecting to 54.91.147.158 [54.91.147.158] port 22.
debug1: Connection established.
debug1: identity file /Users/prikshetsharma/.ssh/datomic-us-east-1-humboi-march-2021-bastion type -1
debug1: identity file /Users/prikshetsharma/.ssh/datomic-us-east-1-humboi-march-2021-bastion-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug1: Authenticating to 54.91.147.158:22 as 'ec2-user'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher:
#2021-03-1416:01Joe Lane👍#2021-03-1416:03Joe LaneNow, what does curl -x return?#2021-03-1416:03zendevil.eth{:s3-auth-path “humboi-march-2021-storagef7f305e7-1h3lt-s3datomic-1650q253gkqr1”}%#2021-03-1416:03Joe LanePerfect.#2021-03-1416:04Joe Laneit looks like you completed the "cloud setup" section and should proceed to https://docs.datomic.com/cloud/tutorial/client.html#2021-03-1416:08zendevil.ethNow I get a different error upon starting the server:
Execution error (ExceptionInfo) at datomic.client.impl.cloud/get-s3-auth-path (cloud.clj:179).
Unable to connect to localhost:8182
#2021-03-1416:09zendevil.ethI changed datomic-cfg to the following:
{
:server-type :ion
:region "us-east-1" ;; e.g. us-east-1
:system "humboi-march-2021"
:endpoint ""
:proxy-port 8182
}
#2021-03-1416:10zendevil.ethI suppose that this means that the program will automatically look for the credentials in .aws folder#2021-03-1416:10Joe Laneshould be humboi-march-2021 in the :endpoint#2021-03-1416:10zendevil.ethwhat if I’m running it on something like heroku? Do I have to create a .aws folder there too?#2021-03-1416:12zendevil.ethNow I get the following error:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
#2021-03-1416:14Joe LaneYou get that error when doing what? I need more context.#2021-03-1416:14zendevil.ethwhen running the server using
(start)
#2021-03-1416:15zendevil.ethwhich I think would run the :start for this:
(defstate conn
:start (let [cfg (-> env :datomic-cfg)
client (d/client cfg)]
(do
(d/create-database
client
{:db-name "humboi-march-2021"})
(d/connect client {:db-name "humboi-march-2021"})))
:stop (-> conn .release))
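[Editor's note: relevant to `(System/getenv "AWS_PROFILE")` returning nil below: a process only sees the environment it was launched with, so the variable must be exported in the shell *before* the REPL/JVM starts; restarting the REPL from that shell is what makes it visible. A demo with a hypothetical variable name:]

```shell
unset DEMO_PROFILE
# A child process launched now does not see the variable...
sh -c 'echo "before export: ${DEMO_PROFILE:-unset}"'   # prints "before export: unset"
export DEMO_PROFILE=default
# ...but one launched after the export does.
sh -c 'echo "after export: ${DEMO_PROFILE:-unset}"'    # prints "after export: default"
```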
#2021-03-1416:18zendevil.ethdo I have to attach another policy to the group?#2021-03-1416:22zendevil.ethit also prints a lot of debug logs#2021-03-1416:22Joe LaneI’m not convinced you’re using your new user. Especially since you’re using default for the access gateway. #2021-03-1416:23zendevil.eth@lanejo01 the default is the new user#2021-03-1416:23zendevil.ethI was wrong when I’d said that the default is root.#2021-03-1416:24zendevil.eththe new user’s access key id on the console is the same as the access key id under [default] in ~/.aws/credentials#2021-03-1416:27Joe LaneAt your repl can you run (System/getenv "AWS_PROFILE")#2021-03-1416:28zendevil.eth@lanejo01 that gives “nil”#2021-03-1416:29Joe LaneHowever you set environment variables, set AWS_PROFILE=default and AWS_REGION=us-east-1 and then try it again?#2021-03-1416:33zendevil.eththat still returns “nil”#2021-03-1416:34zendevil.ethbut I’ve confirmed that the env vars are set with printenv | grep AWS#2021-03-1416:37zendevil.ethafter setting the environment variable, I restarted the repl, and now I’m getting a longer exception:
#2021-03-1416:37zendevil.ethMar 14, 2021 10:05:07 PM com.amazonaws.internal.InstanceMetadataServiceResourceFetcher handleException
WARNING: Fail to retrieve token
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:165)
at com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper.getCredentials(EC2ContainerCredentialsProviderWrapper.java:75)
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.services.s3.S3CredentialsProviderChain.getCredentials(S3CredentialsProviderChain.java:35)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1398)
at cognitect.s3_creds.store$read_s3.invokeStatic(store.clj:39)
at cognitect.s3_creds.store$read_s3.invoke(store.clj:36)
at cognitect.s3_creds.store$get_val.invokeStatic(store.clj:72)
at cognitect.s3_creds.store$get_val.invoke(store.clj:65)
at cognitect.s3_creds.store.ReadStoreImpl$fn__37920.invoke(store.clj:127)
at clojure.core.async$thread_call$fn__15992.invoke(async.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: No route to host (connect failed)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:606)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1226)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1205)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990)
at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:52)
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:80)
... 33 more
Mar 14, 2021 10:05:07 PM com.amazonaws.internal.InstanceMetadataServiceResourceFetcher handleException
WARNING: Fail to retrieve token
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:165)
at com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper.getCredentials(EC2ContainerCredentialsProviderWrapper.java:75)
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.services.s3.S3CredentialsProviderChain.getCredentials(S3CredentialsProviderChain.java:35)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1278)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1398)
at cognitect.s3_creds.store$read_s3.invokeStatic(store.clj:39)
at cognitect.s3_creds.store$read_s3.invoke(store.clj:36)
at cognitect.s3_creds.store$get_val.invokeStatic(store.clj:72)
at cognitect.s3_creds.store$get_val.invoke(store.clj:65)
at cognitect.s3_creds.store.ReadStoreImpl$fn__37920.invoke(store.clj:127)
at clojure.core.async$thread_call$fn__15992.invoke(async.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Host is down (connect failed)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:606)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1226)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1205)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990)
at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:52)
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:80)
... 34 more
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
#2021-03-1416:39Joe LaneAnd from your CLI if you run aws s3 ls you get what response?#2021-03-1416:41zendevil.eth@lanejo01, I get the following:
2021-03-12 21:32:38 146 .keys
#2021-03-1416:44Joe LaneOk. So that tells me your user has the ability to get the keys from s3 and that you are not using that user when you run your application.#2021-03-1416:46Joe LaneLet's go back to our ~/.aws/credentials and ~/.aws/config.
Create a new profile called [humboi] and copy the [default] credentials section to the new [humboi] profile.#2021-03-1416:47Joe LaneSimilarly for the config file, copy [default] to [humboi] and make sure it's set to use the us-east-1 region.#2021-03-1416:48Joe LaneThen, restart the access gateway with ./datomic client access humboi-march-2021 -p humboi -r us-east-1#2021-03-1416:50Joe LaneAnd then change your client config map to
{
:server-type :ion
:region "us-east-1" ;; e.g. us-east-1
:system "humboi-march-2021"
:creds-profile "humboi"
:endpoint ""
:proxy-port 8182
}#2021-03-1416:51Joe LaneThis configuration will be SPECIFIC TO YOUR MACHINE and other users will need to have a different one. Also, when you deploy this, you will need to remove that creds profile and use a different approach (we have docs on this).#2021-03-1417:01zendevil.eth@lanejo01 I created [humboi] in config and credentials with the same data as the [default], ran ./datomic client access humboi-march-2021 -p humboi -r us-east-1, added :creds-profile “humboi”, restarted the repl and started the server with (start). However, I get the error:#2021-03-1417:01zendevil.ethExecution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
profile file cannot be null#2021-03-1417:05zendevil.ethalso can you please share the link of the docs with deployment instructions?#2021-03-1417:14Joe LaneI'd like you to:
1. clone this repo https://github.com/Datomic/ion-starter and cd into it
2. change this file in the repo to your config map https://github.com/Datomic/ion-starter/blob/master/resources/datomic/ion/starter/config.edn
3. FROM THE TERMINAL (no lein, no cider, no emacs, etc. Just plain clj) copy-paste the forms in https://github.com/Datomic/ion-starter/blob/master/siderail/tutorial.repl one at a time up to Line 17.
4. Paste the output here.
This is an attempt at a minimal repro to help me understand whether the problem is with your IAM User/role/group or has something to do with your specific project.#2021-03-1417:17zendevil.eth@lanejo01 Upon doing clj, I get:
Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.50 in central (https://repo1.maven.org/maven2/)#2021-03-1417:29Joe LaneCan you give your IAM user S3FullAccess policy for now and try again?#2021-03-1417:32zendevil.eth@lanejo01, after adding S3FullAccess to the group and running clj, I get the following error:
Error building classpath. Could not transfer artifact com.amazonaws:aws-java-sdk-kms:jar:1.11.210 from/to central (): Range Not Satisfiable (416)
#2021-03-1417:36Joe LaneThat's... startling? https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-kms/1.11.210/#2021-03-1417:39Joe LaneDo you have high network latency?#2021-03-1417:41zendevil.ethi don’t know#2021-03-1417:44Joe LaneCan you update your clojure cli tools to the latest and try again?#2021-03-1417:45Joe LaneI'm not sure why you wouldn't have been able to download that aws jar from maven central ¯\(ツ)/¯#2021-03-1417:49Alex Miller (Clojure team)That error is indicating a bad or unfulfillable maven version range somewhere#2021-03-1417:50Alex Miller (Clojure team)This specific lib error feels familiar#2021-03-1417:50zendevil.ethI’m trying to update clj with brew but get the following error:#2021-03-1417:50zendevil.ethError: Your CLT does not support macOS 11.
It is either outdated or was modified.
Please update your CLT or delete it if no updates are available.
Update them from Software Update in System Preferences or run:
softwareupdate --all --install --force
If that doesn't show you an update run:
sudo rm -rf /Library/Developer/CommandLineTools
sudo xcode-select --install
Alternatively, manually download them from:
.
Error: An exception occurred within a child process:
SystemExit: exit
#2021-03-1417:51Alex Miller (Clojure team)I’m not sure what CLT is, this is outside of Clojure/Datomic#2021-03-1417:51Alex Miller (Clojure team)Xcode command line tools maybe?#2021-03-1417:53Alex Miller (Clojure team)I suspect you’re in for a bit of a yak shave here to update your dev tooling#2021-03-1418:02Alex Miller (Clojure team)@ps what version of the Clojure CLI are you on? clj -Sdescribe should say#2021-03-1418:04FlavaDavetrying to get datomic to run locally but keep getting an error:
~/.lein/profiles.clj
{:user
{:plugins [[lein-datomic "0.2.0"]]
:datomic {:install-location "/Users/Dave/Projects/datomic-free-0.9.5703.21"}}}
Project.clj
(defproject pet-owners "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:url ""
:license {:name "EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0"
:url ""}
:dependencies [[org.clojure/clojure "1.10.1"]
[com.datomic/datomic-free "0.9.5697"]
[expectations "2.0.9"]]
:datomic {:schemas ["resources/datomic" ["schema.edn"]]}
:plugins [[lein-autoexpect "1.9.0"]]
:profiles {:dev
{:datomic {:config "resources/datomic/free-transactor-template.properties"
:db-uri "datomic:
when I run lein datomic start I get an error (will put in thread)#2021-03-1418:05FlavaDaveclojure.lang.Compiler$CompilerException: Syntax error macroexpanding clojure.core/fn at (clojure/core/unify.clj:83:18).
#:clojure.error{:phase :macro-syntax-check, :line 83, :column 18, :source "clojure/core/unify.clj", :symbol clojure.core/fn}
at clojure.lang.Compiler.checkSpecs (Compiler.java:6972)
clojure.lang.Compiler.macroexpand1 (Compiler.java:6988)
clojure.lang.Compiler.analyzeSeq (Compiler.java:7093)
clojure.lang.Compiler.analyze (Compiler.java:6789)
clojure.lang.Compiler.analyzeSeq (Compiler.java:7095)
clojure.lang.Compiler.analyze (Compiler.java:6789)
clojure.lang.Compiler.access$300 (Compiler.java:38)
clojure.lang.Compiler$DefExpr$Parser.parse (Compiler.java:596)
clojure.lang.Compiler.analyzeSeq (Compiler.java:7107)
clojure.lang.Compiler.analyze (Compiler.java:6789)
clojure.lang.Compiler.analyze (Compiler.java:6745)
clojure.lang.Compiler.eval (Compiler.java:7181)
clojure.lang.Compiler.load (Compiler.java:7636)
clojure.lang.RT.loadResourceScript (RT.java:381)
clojure.lang.RT.loadResourceScript (RT.java:372)
clojure.lang.RT.load (RT.java:459)
clojure.lang.RT.load (RT.java:424)
clojure.core$load$fn__6839.invoke (core.clj:6126)
clojure.core$load.invokeStatic (core.clj:6125)
clojure.core$load.doInvoke (core.clj:6109)
clojure.lang.RestFn.invoke (RestFn.java:408)
clojure.core$load_one.invokeStatic (core.clj:5908)
clojure.core$load_one.invoke (core.clj:5903)
clojure.core$load_lib$fn__6780.invoke (core.clj:5948)
clojure.core$load_lib.invokeStatic (core.clj:5947)
clojure.core$load_lib.doInvoke (core.clj:5928)
clojure.lang.RestFn.applyTo (RestFn.java:142)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$load_libs.invokeStatic (core.clj:5985)
clojure.core$load_libs.doInvoke (core.clj:5969)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$require.invokeStatic (core.clj:6007)
clojure.core$require.doInvoke (core.clj:6007)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.core.contracts.impl.transformers$eval739$loading__6721__auto____740.invoke (transformers.clj:1)
clojure.core.contracts.impl.transformers$eval739.invokeStatic (transformers.clj:1)
clojure.core.contracts.impl.transformers$eval739.invoke (transformers.clj:1)
clojure.lang.Compiler.eval (Compiler.java:7177)
clojure.lang.Compiler.eval (Compiler.java:7166)
clojure.lang.Compiler.load (Compiler.java:7636)
clojure.lang.RT.loadResourceScript (RT.java:381)
clojure.lang.RT.loadResourceScript (RT.java:372)
clojure.lang.RT.load (RT.java:459)
clojure.lang.RT.load (RT.java:424)
clojure.core$load$fn__6839.invoke (core.clj:6126)
clojure.core$load.invokeStatic (core.clj:6125)
clojure.core$load.doInvoke (core.clj:6109)
clojure.lang.RestFn.invoke (RestFn.java:408)
clojure.core$load_one.invokeStatic (core.clj:5908)
clojure.core$load_one.invoke (core.clj:5903)
clojure.core$load_lib$fn__6780.invoke (core.clj:5948)
clojure.core$load_lib.invokeStatic (core.clj:5947)
clojure.core$load_lib.doInvoke (core.clj:5928)
clojure.lang.RestFn.applyTo (RestFn.java:142)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$load_libs.invokeStatic (core.clj:5985)
clojure.core$load_libs.doInvoke (core.clj:5969)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$require.invokeStatic (core.clj:6007)
clojure.core$require.doInvoke (core.clj:6007)
clojure.lang.RestFn.invoke (RestFn.java:408)
leinjacker.defconstrainedfn$eval733$loading__6721__auto____734.invoke (defconstrainedfn.clj:1)
leinjacker.defconstrainedfn$eval733.invokeStatic (defconstrainedfn.clj:1)
leinjacker.defconstrainedfn$eval733.invoke (defconstrainedfn.clj:1)
clojure.lang.Compiler.eval (Compiler.java:7177)
clojure.lang.Compiler.eval (Compiler.java:7166)
clojure.lang.Compiler.load (Compiler.java:7636)
clojure.lang.RT.loadResourceScript (RT.java:381)
clojure.lang.RT.loadResourceScript (RT.java:372)
clojure.lang.RT.load (RT.java:459)
clojure.lang.RT.load (RT.java:424)
clojure.core$load$fn__6839.invoke (core.clj:6126)
clojure.core$load.invokeStatic (core.clj:6125)
clojure.core$load.doInvoke (core.clj:6109)
clojure.lang.RestFn.invoke (RestFn.java:408)
clojure.core$load_one.invokeStatic (core.clj:5908)
clojure.core$load_one.invoke (core.clj:5903)
clojure.core$load_lib$fn__6780.invoke (core.clj:5948)
clojure.core$load_lib.invokeStatic (core.clj:5947)
clojure.core$load_lib.doInvoke (core.clj:5928)
clojure.lang.RestFn.applyTo (RestFn.java:142)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$load_libs.invokeStatic (core.clj:5985)
clojure.core$load_libs.doInvoke (core.clj:5969)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invokeStatic (core.clj:669)
clojure.core$use.invokeStatic (core.clj:6093)
clojure.core$use.doInvoke (core.clj:6093)
clojure.lang.RestFn.invoke (RestFn.java:408)
leinjacker.utils$eval725$loading__6721__auto____726.invoke (utils.clj:1)
leinjacker.utils$eval725.invokeStatic (utils.clj:1)
leinjacker.utils$eval725.invoke (utils.clj:1)
clojure.lang.Compiler.eval (Compiler.java:7177)
clojure.lang.Compiler.eval (Compiler.java:7166)
clojure.lang.Compiler.load (Compiler.java:7636)
clojure.lang.RT.loadResourceScript (RT.java:381)
clojure.lang.RT.loadResourceScript (RT.java:372)
clojure.lang.RT.load (RT.java:459)
clojure.lang.RT.load (RT.java:424)
clojure.core$load$fn__6839.invoke (core.clj:6126)
clojure.core$load.invokeStatic (core.clj:6125)
clojure.core$load.doInvoke (core.clj:6109)
clojure.lang.RestFn.invoke (RestFn.java:408)
clojure.core$load_one.invokeStatic (core.clj:5908)
clojure.core$load_one.invoke (core.clj:5903)
clojure.core$load_lib$fn__6780.invoke (core.clj:5948)
clojure.core$load_lib.invokeStatic (core.clj:5947)
clojure.core$load_lib.doInvoke (core.clj:5928)
clojure.lang.RestFn.applyTo (RestFn.java:142)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$load_libs.invokeStatic (core.clj:5985)
clojure.core$load_libs.doInvoke (core.clj:5969)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$require.invokeStatic (core.clj:6007)
clojure.core$require.doInvoke (core.clj:6007)
clojure.lang.RestFn.invoke (RestFn.java:408)
leinjacker.eval$eval717$loading__6721__auto____718.invoke (eval.clj:1)
leinjacker.eval$eval717.invokeStatic (eval.clj:1)
leinjacker.eval$eval717.invoke (eval.clj:1)
clojure.lang.Compiler.eval (Compiler.java:7177)
clojure.lang.Compiler.eval (Compiler.java:7166)
clojure.lang.Compiler.load (Compiler.java:7636)
clojure.lang.RT.loadResourceScript (RT.java:381)
clojure.lang.RT.loadResourceScript (RT.java:372)
clojure.lang.RT.load (RT.java:459)
clojure.lang.RT.load (RT.java:424)
clojure.core$load$fn__6839.invoke (core.clj:6126)
clojure.core$load.invokeStatic (core.clj:6125)
clojure.core$load.doInvoke (core.clj:6109)
clojure.lang.RestFn.invoke (RestFn.java:408)
clojure.core$load_one.invokeStatic (core.clj:5908)
clojure.core$load_one.invoke (core.clj:5903)
clojure.core$load_lib$fn__6780.invoke (core.clj:5948)
clojure.core$load_lib.invokeStatic (core.clj:5947)
clojure.core$load_lib.doInvoke (core.clj:5928)
clojure.lang.RestFn.applyTo (RestFn.java:142)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$load_libs.invokeStatic (core.clj:5985)
clojure.core$load_libs.doInvoke (core.clj:5969)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invokeStatic (core.clj:669)
clojure.core$use.invokeStatic (core.clj:6093)
clojure.core$use.doInvoke (core.clj:6093)
clojure.lang.RestFn.invoke (RestFn.java:408)
leiningen.datomic$eval663$loading__6721__auto____664.invoke (datomic.clj:1)
leiningen.datomic$eval663.invokeStatic (datomic.clj:1)
leiningen.datomic$eval663.invoke (datomic.clj:1)
clojure.lang.Compiler.eval (Compiler.java:7177)
clojure.lang.Compiler.eval (Compiler.java:7166)
clojure.lang.Compiler.load (Compiler.java:7636)
clojure.lang.RT.loadResourceScript (RT.java:381)
clojure.lang.RT.loadResourceScript (RT.java:372)
clojure.lang.RT.load (RT.java:459)
clojure.lang.RT.load (RT.java:424)
clojure.core$load$fn__6839.invoke (core.clj:6126)
clojure.core$load.invokeStatic (core.clj:6125)
clojure.core$load.doInvoke (core.clj:6109)
clojure.lang.RestFn.invoke (RestFn.java:408)
clojure.core$load_one.invokeStatic (core.clj:5908)
clojure.core$load_one.invoke (core.clj:5903)
clojure.core$load_lib$fn__6780.invoke (core.clj:5948)
clojure.core$load_lib.invokeStatic (core.clj:5947)
clojure.core$load_lib.doInvoke (core.clj:5928)
clojure.lang.RestFn.applyTo (RestFn.java:142)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$load_libs.invokeStatic (core.clj:5985)
clojure.core$load_libs.doInvoke (core.clj:5969)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.core$apply.invokeStatic (core.clj:667)
clojure.core$require.invokeStatic (core.clj:6007)
clojure.core$require.doInvoke (core.clj:6007)
clojure.lang.RestFn.invoke (RestFn.java:408)
leiningen.core.utils$require_resolve.invokeStatic (utils.clj:102)
leiningen.core.utils$require_resolve.invoke (utils.clj:95)
leiningen.core.utils$require_resolve.invokeStatic (utils.clj:105)
leiningen.core.utils$require_resolve.invoke (utils.clj:95)
leiningen.core.main$lookup_task_var.invokeStatic (main.clj:69)
leiningen.core.main$lookup_task_var.invoke (main.clj:65)
leiningen.core.main$pass_through_help_QMARK_.invokeStatic (main.clj:79)
leiningen.core.main$pass_through_help_QMARK_.invoke (main.clj:73)
leiningen.core.main$task_args.invokeStatic (main.clj:82)
leiningen.core.main$task_args.invoke (main.clj:81)
leiningen.core.main$resolve_and_apply.invokeStatic (main.clj:339)
leiningen.core.main$resolve_and_apply.invoke (main.clj:336)
leiningen.core.main$_main$fn__7420.invoke (main.clj:453)
leiningen.core.main$_main.invokeStatic (main.clj:442)
leiningen.core.main$_main.doInvoke (main.clj:439)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.lang.Var.applyTo (Var.java:705)
clojure.core$apply.invokeStatic (core.clj:665)
clojure.main$main_opt.invokeStatic (main.clj:514)
clojure.main$main_opt.invoke (main.clj:510)
clojure.main$main.invokeStatic (main.clj:664)
clojure.main$main.doInvoke (main.clj:616)
clojure.lang.RestFn.applyTo (RestFn.java:137)
clojure.lang.Var.applyTo (Var.java:705)
clojure.main.main (main.java:40)#2021-03-1418:06FlavaDaveCaused by: clojure.lang.ExceptionInfo: Call to clojure.core/fn did not conform to spec.
#:clojure.spec.alpha{:problems ({:path [:fn-tail :arity-1 :params], :pred clojure.core/vector?, :val clojure.core.unify/var-unify, :via [:clojure.core.specs.alpha/params+body :clojure.core.specs.alpha/param-list :clojure.core.specs.alpha/param-list], :in [0]} {:path [:fn-tail :arity-n], :pred (clojure.core/fn [%] (clojure.core/or (clojure.core/nil? %) (clojure.core/sequential? %))), :val clojure.core.unify/var-unify, :via [:clojure.core.specs.alpha/params+body :clojure.core.specs.alpha/params+body], :in [0]}), :spec #object[clojure.spec.alpha$regex_spec_impl$reify__2509 0x57d7f8ca "#2021-03-1418:09Joe LaneThat looks like a bug in Clojure core unify, unrelated to datomic. #2021-03-1418:23Alex Miller (Clojure team)it is, and was fixed 5 years ago#2021-03-1418:24Alex Miller (Clojure team)so you're getting something old in the stack somehow#2021-03-1418:25Alex Miller (Clojure team)I'd be suspicious of the plugins#2021-03-1418:28Alex Miller (Clojure team)both lein-datomic and lein-autoexpect pull in old versions of core.unify 0.5.3 (was fixed in 0.5.7 in 2016)#2021-03-1419:09FlavaDaveoh i see now. I was doing a follow along with a youtube video i found and wasnt paying attention to the fact that everything he used in the video is super old. I should have checked those first.#2021-03-1419:09FlavaDaveThank you!#2021-03-1512:16jaret@UF41YH1CM What video were you following? Just curious as we're always looking at creating updated similar resources.#2021-03-1516:34FlavaDave@U1QJACBUM
https://www.youtube.com/watch?v=ao7xEwCjrWQ&t=2026s
I gravitated towards this because some of his other videos were very helpful for me. I should have been paying more attention to how old it was though. lol#2021-03-1418:14zendevil.eth@alexmiller the version is 1.10.2.796#2021-03-1418:23Alex Miller (Clojure team)well, that's latest stable so no reason to update that#2021-03-1418:27Joe LaneMaybe a corrupt partial download?#2021-03-1418:29Alex Miller (Clojure team)I'm confused if you're working with lein or working clj and if so which error you're having at this point#2021-03-1418:29zendevil.eththat’s right, after deleting ~/.m2/repository it worked#2021-03-1418:30zendevil.ethI see the repl now#2021-03-1418:31zendevil.eth@lanejo01 in the 6th line of https://github.com/Datomic/ion-starter/blob/master/siderail/tutorial.repl#2021-03-1418:31zendevil.ethI get:#2021-03-1418:31zendevil.ethUnable to connect to localhost:8182#2021-03-1421:02mikejcusackYou have to keep the proxy running while in use. Looks like you killed it since running it.#2021-03-1420:12Joe Lane@ps let’s look into it tomorrow or later today. #2021-03-1505:41zendevil.eth@lanejo01 after starting the proxy, it works and I get the following outputs:
(def client (starter/get-client))
#'user/client
(starter/ensure-sample-dataset)
:loaded
(def conn (starter/get-connection))
#'user/conn
@(def db (d/db conn))
{:t 16, :next-t 17, :db-name "datomic-docs-tutorial", :database-id "b04916cd-b8d8-4d84-b933-90ec6affc30a", :type :datomic.client/db}
(starter/get-schema db)
(#:db{:id 39, :ident :fressian/tag, :valueType :db.type/keyword, :cardinality :db.cardinality/one, :doc "Keyword-valued attribute of a value type that specifies the underlying fressian type used for serialization."} #:db{:id 73, :ident :inv/sku, :valueType :db.type/string, :cardinality :db.cardinality/one, :unique #:db{:id 38, :ident :db.unique/identity}} #:db{:id 74, :ident :inv/color, :valueType :db.type/keyword, :cardinality :db.cardinality/one} #:db{:id 75, :ident :inv/size, :valueType :db.type/keyword, :cardinality :db.cardinality/one} #:db{:id 76, :ident :inv/type, :valueType :db.type/keyword, :cardinality :db.cardinality/one} #:db{:id 77, :ident :order/items, :valueType :db.type/ref, :cardinality :db.cardinality/many, :isComponent true} #:db{:id 78, :ident :item/id, :valueType :db.type/ref, :cardinality :db.cardinality/one} #:db{:id 79, :ident :item/count, :valueType :db.type/long, :cardinality :db.cardinality/one} #:db{:id 80, :ident :inv/count, :valueType :db.type/long, :cardinality :db.cardinality/one})
(starter/get-items-by-type db :shirt '[:inv/sku :inv/color :inv/size])
[[#:inv{:sku "SKU-0", :color :red, :size :small}] [#:inv{:sku "SKU-4", :color :red, :size :medium}] [#:inv{:sku "SKU-8", :color :red, :size :large}] [#:inv{:sku "SKU-12", :color :red, :size :xlarge}] [#:inv{:sku "SKU-16", :color :green, :size :small}] [#:inv{:sku "SKU-20", :color :green, :size :medium}] [#:inv{:sku "SKU-24", :color :green, :size :large}] [#:inv{:sku "SKU-28", :color :green, :size :xlarge}] [#:inv{:sku "SKU-32", :color :blue, :size :small}] [#:inv{:sku "SKU-36", :color :blue, :size :medium}] [#:inv{:sku "SKU-40", :color :blue, :size :large}] [#:inv{:sku "SKU-44", :color :blue, :size :xlarge}] [#:inv{:sku "SKU-48", :color :yellow, :size :small}] [#:inv{:sku "SKU-52", :color :yellow, :size :medium}] [#:inv{:sku "SKU-56", :color :yellow, :size :large}] [#:inv{:sku "SKU-60", :color :yellow, :size :xlarge}]]
user=>
#2021-03-1506:39zendevil.ethSo it seems like the ion-starter works#2021-03-1506:39zendevil.ethBut in my project, when I run the repl, I get this error:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
profile file cannot be null
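An aside for readers hitting the same wall: "profile file cannot be null" is raised by the AWS Java SDK's profile-credentials loading when it cannot locate a credentials file from the environment the REPL was launched in (hedged reading of the message, consistent with jaret's diagnosis below). A quick sanity check from the same shell that launches the REPL; these are standard commands, nothing Datomic-specific, and `aws configure list-profiles` assumes AWS CLI v2:

```shell
# Verify the credentials file exists and is readable from this shell.
ls -l ~/.aws/credentials

# List the profiles the AWS CLI can see (AWS CLI v2 only).
aws configure list-profiles
```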
#2021-03-1512:27jaret@ps I believe that error indicates that you do not have AWS credentials. Your REPL also needs AWS credentials when launched. And in trying to follow along from your first post... I want to remind you that you will need to remove all the on-prem client usage and replace with cloud usage (i.e. update your deps and confirm all of your connections are using the datomic client Cloud config you used in the ion-starter project). Datomic on prem client will not work in cloud.#2021-03-1512:29jaretIn your ~/.aws/credentials do you have a [default] profile or any profiles configured?#2021-03-1512:31zendevil.eth@jaret yes I have two profiles. [default] and [humboi]. The credentials for both are exactly the same though#2021-03-1512:41jaretand you are ensuring that the profile is used when launching the repl or manually sourcing aws credentials before launching the repl?#2021-03-1512:42zendevil.ethI don’t know how to make sure that that profile is used #2021-03-1512:43jaretHow are you launching the REPL? CLJ tools?#2021-03-1512:43zendevil.ethIn ion-starter yes #2021-03-1512:43zendevil.ethIn my project using lien #2021-03-1512:46jaretDo you have AWS credentials in ~/.lein/profiles.clj? what version of lein are you using? Can you share your complete project.clj redacted of any sensitive information?#2021-03-1513:14zendevil.eth@jaret here’s the project.clj:
(defproject humboiserver "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:url ""
:dependencies [[ch.qos.logback/logback-classic "1.2.3"]
[cheshire "5.10.0"]
[clojure.java-time "0.3.2"]
[com.google.guava/guava "27.0.1-jre"]
[com.novemberain/monger "3.1.0" :exclusions [com.google.guava/guava]]
[cprop "0.1.17"]
[expound "0.8.7"]
[funcool/struct "1.4.0"]
[luminus-aleph "0.1.6"]
[luminus-transit "0.1.2"]
[luminus/ring-ttl-session "0.3.3"]
[markdown-clj "1.10.5"]
[metosin/muuntaja "0.6.7"]
[metosin/reitit "0.5.10"]
[metosin/ring-http-response "0.9.1"]
[mount "0.1.16"]
[nrepl "0.8.3"]
[org.clojure/clojure "1.10.1"]
[org.clojure/tools.cli "1.0.194"]
[org.clojure/tools.logging "1.1.0"]
[org.webjars.npm/bulma "0.9.1"]
[org.webjars.npm/material-icons "0.3.1"]
[org.webjars/webjars-locator "0.40"]
[ring-webjars "0.2.0"]
[ring/ring-core "1.8.2"]
[ring/ring-defaults "0.3.2"]
[amazonica "0.3.153"]
[selmer "1.12.31"]
[com.datomic/client-cloud "0.8.105"]
]
:min-lein-version "2.0.0"
:source-paths ["src/clj"]
:test-paths ["test/clj"]
:resource-paths ["resources"]
:target-path "target/%s/"
:main ^:skip-aot humboiserver.core
:plugins []
:profiles
{:uberjar {:omit-source true
:aot :all
:uberjar-name "humboiserver.jar"
:source-paths ["env/prod/clj" ]
:resource-paths ["env/prod/resources"]}
:dev [:project/dev :profiles/dev]
:test [:project/dev :project/test :profiles/test]
:project/dev {:jvm-opts ["-Dconf=dev-config.edn" ]
:dependencies [[pjstadig/humane-test-output "0.10.0"]
[prone "2020-01-17"]
[ring/ring-devel "1.8.2"]
[ring/ring-mock "0.4.0"]]
:plugins [[com.jakemccrary/lein-test-refresh "0.24.1"]
[jonase/eastwood "0.3.5"]]
:source-paths ["env/dev/clj" ]
:resource-paths ["env/dev/resources"]
:repl-options {:init-ns user
:timeout 120000}
:injections [(require 'pjstadig.humane-test-output)
(pjstadig.humane-test-output/activate!)]}
:project/test {:jvm-opts ["-Dconf=test-config.edn" ]
:resource-paths ["env/test/resources"] }
:profiles/dev {}
:profiles/test {}})#2021-03-1513:14zendevil.ethI don’t use .lein/profiles#2021-03-1513:19jaretand are you just running lein repl or are you passing a particular alias here ^?#2021-03-1513:48zendevil.eth@jaret I’m using cider-jack-in#2021-03-1513:48zendevil.ethI don’t know what an alias is in this context#2021-03-1517:12simongrayI think alias is the deps.edn term. Think they're called profiles in leiningen.#2021-03-1515:24zendevil.eth@jaret is there a way to pass in the credentials from the project.clj?#2021-03-1517:29zendevil.ethor alternatively, is there a way to put the credentials directly in the code rather than depending on ~/.aws?#2021-03-1614:50zendevil.ethhi @jaret @lanejo01 @alexmiller if you can still help me that would be great#2021-03-1614:54jaretApologies @ps but I don't actively use lein. I'll take a look at this today if I get some time. is there a way you can test just running lein repl with your credentials sourced from the project directory to see if we can eliminate the profile null error you are getting via your current method?#2021-03-1614:55zendevil.eth@jaret is there a way that the datomic api accepts access key and secret directly rather than through a profile?#2021-03-1614:55zendevil.eththis would also be helpful to deploy on something like heroku#2021-03-1615:12jaretYou can certainly source credentials or create env vars. But I don't recommend that you hard code a solution here. As Joe and others have mentioned you can supply a creds-profile/creds-provider option in your client config map, but you have to have those things configured. Docs on credentials in AWS: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html. 
let me know the results of running lein repl from your project and if you can load your project namespaces with credentials sourced.#2021-03-1615:54uwoJust a friendly mention that there's a broken link on "Log API" in this section https://docs.datomic.com/on-prem/best-practices.html#use-log-api
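jaret's advice to "source credentials or create env vars" before launching the REPL can be sketched as follows. The environment variables are the standard AWS SDK ones and the profile name comes from the thread; the key values are placeholders:

```shell
# Option 1: export credentials so the SDK's default provider chain
# finds them, then start the REPL from the same shell.
export AWS_ACCESS_KEY_ID=AKIA...     # placeholder
export AWS_SECRET_ACCESS_KEY=...     # placeholder
lein repl

# Option 2: select a named profile from ~/.aws/credentials instead.
export AWS_PROFILE=humboi
lein repl
```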
#2021-03-1618:41joshkhshould i expect any downtime while deploying to a query group with a single node, which is serving a handler via http-direct?#2021-03-1618:42Joe Lane@joshkh Depends on if you set the max size of the QG to 1 vs the desired size.#2021-03-1618:43joshkhdesired size is 1, max size is 4*#2021-03-1618:44Joe LaneThen I believe any disruption should be minimal, not sure about your application specifics.
That being said, why not just change the QG to size 2+ during the deployment and shrink it back down after?#2021-03-1618:48joshkhthat's a possibility for sure. but before i script out the process in CI to spin up a second instance, wait for health checks, deploy and tear down, i'd like to confirm first that a small downtime is expected with a desired capacity 1, minimum query group instances 1, maximum query group instances 4, minimum number of query group instances during (CF) update 1.#2021-03-1618:52joshkhi would have thought the 1/4 split would temporarily scale the group. you have a good point about application specifics, perhaps something is delaying a response past the health check.#2021-03-1618:52Joe LaneYou should test that scenario to see if it matches your needs and expectations. Have you done that already?#2021-03-1618:53joshkhyeah 🙂 asking only because it's something i've been experiencing#2021-03-1621:44esp1Is there a recommended way to provide private access to a Datomic Cloud application without exposing it to the internet? I'm trying to figure out how to give access to a an application I have deployed via Datomic Ions in a Production topology Datomic Cloud VPC to other users in our private AWS network. The Datomic instructions for setting up API Gateway HTTP Direct will route traffic over the external internet, which I'd like to avoid.#2021-03-1717:31Joe LaneHi @U06BTJLTU, we DO support this and have for quite some time.
I'll make a quick playbook for how to do this and send it to you today or tomorrow. We will update our docs accordingly.#2021-03-1718:11esp1Great, thanks @lanejo01!#2021-03-1807:20steveb8nI do this by invoking the Ion Lambda using the AWS API. It was a bit tricky to get the request/response encoding right but, once done, it works great for internal Ion access#2021-03-1812:23Joe LaneHe’s talking about http-direct though which will have much higher performance, especially in the same vpc. #2021-03-1914:01Joe Lane@U06BTJLTU https://docs.datomic.com/cloud/ions/ions-tutorial.html#inside-vpc#2021-03-1914:02Joe LaneLet me know if you run into problems with that.#2021-03-1920:40esp1Thanks @lanejo01! This is helpful, but I am actually interested in getting users that are logged in to our corporate AWS cloud through VPN access to the Datomic VPC - so they would be accessing it from outside the Datomic VPC, but inside our corporate AWS network. The two options I was exploring were:
• Setting up peering/routing to the Datomic VPC directly
• Using a private API Gateway endpoint#2021-03-1920:42esp1I can go the peering/routing way, but that would involve making changes to the Datomic VPC, and I was concerned that if I needed to update Datomic VPC via the CF templates those changes might be lost.#2021-03-1920:44esp1The private API GW endpoint seemed like it would be a solution that could be managed independently from the Datomic CF stacks, but I haven't set up one of these before and have been struggling with how to craft an appropriate resource policy to make it work.#2021-03-1920:56jaretWe should work together on this in a support case ^ but I would recommend an API GW.#2021-03-1920:57jaretCan you throw me an e-mail and I will get together a recommended policy.#2021-03-1920:57jaretperhaps after I can add this to the docs.#2021-03-1921:30esp1Thanks @jaret, will do#2021-03-1818:57zendevil.ethI’m trying to use datomic cloud and using the
[datomic.client.api :as d]
api for it. This is what I’ve done so far:
Create a client:
(d/client {
:server-type :ion
:region "us-east-1"
:system "humboi-march-2021"
:creds-profile "humboi"
:endpoint ""
:proxy-port 8182
})
And I evidently have the humboi named-profile in my ~/.aws/credentials:
[humboi]
aws_access_key_id = foobar
aws_secret_access_key = foobarbaz
But when I run this:
(d/create-database
client
{:db-name "humboi-march-2021"})
Amongst a lot of logs, I get this exception:
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
profile file cannot be null
Why am I getting this exception? Could it be that the database was in fact created and this exception can be ignored, or is it that the app genuinely cannot get the aws credentials?
@jaret @lanejo01 @alexmiller thanks for your help so far.#2021-03-1818:59Joe LaneYour client cannot get aws credentials, we proved that everything else works by running with the ion-starter project vs your existing app.#2021-03-1819:01zendevil.eth@lanejo01 if the credentials file can’t be read, can I put the key and the secret in the cfg map directly somehow?#2021-03-1819:01Joe LaneNo, it doesn't work like that.#2021-03-1819:02Joe LaneThis isn't a datomic related exception, see https://www.google.com/search?q=profile+file+cannot+be+null&rlz=1C5CHFA_enUS913US914&oq=profile+file+cannot+be+null&aqs=chrome..69i57j0j0i22i30l4j69i60l2.229j0j7&sourceid=chrome&ie=UTF-8#2021-03-1819:04Joe LaneRandom guess, datomic-cloud depends on a particular version of the AWS SDK, does Amazonica override that version to an incompatible version?#2021-03-1819:10zendevil.eth@lanejo01 I don’t think so. I was using amazonica 0.3.153, which I removed from dependencies, along with the related code. I still see this error upon restarting the server#2021-03-1819:11Joe Lanedid you lein clean?#2021-03-1819:14zendevil.eth@lanejo01, yes#2021-03-1820:05zendevil.eth@lanejo01 is the library supposed to work if only the environment variables AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy are provided? Does it still look for ~/.aws/credentials?#2021-03-1820:07Joe LaneYou already proved that you don't need those set a few days ago. https://clojurians.slack.com/archives/C03RZMDSH/p1615740607282600#2021-03-1820:07Joe LaneWhen ion-starter worked, you didn't set credentials directly at all.#2021-03-1820:09zendevil.eth@lanejo01 you mention this: “Also, when you deploy this, you will. need to remove that creds profile and use a different approach”. What’s the different approach? Maybe I can try it in this case?#2021-03-1820:12Joe LaneI don't think you'd want to deploy this on heroku. 
EC2 has iam roles baked in, you could use elastic-beanstalk for a heroku-like experience.#2021-03-1820:16zendevil.ethI see, when I’d deploy to elastic-beanstalk, the iam roles will be “baked in”. I’ll look into it. But I’ve really hit a wall with my dev setup and getting the ~/.aws/credentials into the app#2021-03-1902:36zendevil.eth@lanejo01 I was able to deploy on beanstalk, but when I use the following line:
(d/client {
:server-type :ion
:region "us-east-1" ;; e.g. us-east-1
:system "humboi-march-2021"
:creds-profile "humboi"
:endpoint ""
:proxy-port 8182
})
I get “Unable to connect to localhost:8182” in the beanstalk logs#2021-03-1902:38zendevil.ethHere are the logs. Notice line 363.#2021-03-1902:44zendevil.ethI’m guessing that’s because the following command isn’t run on beanstalk:
./datomic client access humboi-march-2021 -p humboi -r us-east-1
But then how can one run this command on beanstalk?#2021-03-1912:32Joe LaneYou need to run beanstalk in the same vpc as your system as a client application https://docs.datomic.com/cloud/operation/client-applications.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html
Remove the creds-profile and proxy-port entries from your config when running in beanstalk.
You do not need to run the access gateway cli commands when you are in the same vpc. #2021-03-1903:49onetomdatomic solo up <system name> --wait fails with
Upping <system name>-TxAutoScalingGroup-89CAXXFQ591C
Upping <system name>-BastionAutoScalingGroup-128DD1GX31OU7
Waiting for gateway to start.
Execution error (ExceptionInfo) at datomic.tools.ops.process/sh! (process.clj:64).
Shell command failed
it doesn't throw this error without the --wait option though, so at least it's usable.
anyone with the same error or anyone who is using it successfully?
couldn't find any posts on http://forum.datomic.com about this issue, so i will make one eventually.#2021-03-1903:53onetomehhhh, it's shelling out to the aws cli command, which i don't have in my specific environment:
[{:type clojure.lang.ExceptionInfo,
:message "Shell command failed",
:data
{:args
("aws"
"ec2"
"wait"
"instance-running"
"--filters"
"Name=tag:Name,Values=xxx-datomic-system-bastion"
"Name=tag:datomic:system,Values=xxx-datomic-system"
"Name=instance-state-name,Values=running"),#2021-03-1904:02onetomi got this error from the stack trace which was saved into /var/folders/dm/bjgtcwgx7nqfh3flbpq7m0qc0000gn/T/clojure-927549602744154670.edn
such trace file paths are always printed at the end of clojure cli errors, but i've noticed that ppl often forget to look inside them.#2021-03-1907:57tatutanyone using divert-system? I’m not really sure what it means actually… I’ve only used local dev with locally created database in file storage. Now I’m looking into having test envs that have a copy of a cloud database as basis#2021-03-1907:59tatutwhat does divert actually do? do queries copy data from the diverted system#2021-03-1908:04Alex Miller (Clojure team)you use import-cloud to import (a subset of your) data from your cloud system to your local storage, then divert-system will direct queries to be answered via the local storage instead of prod#2021-03-1908:06Alex Miller (Clojure team)see https://docs.datomic.com/cloud/dev-local.html#2021-03-1908:14tatutyeah, I read that page but it wasn’t clear to me what it does… ok so import-cloud is the one I need#2021-03-1917:30mikejcusackIf you are importing an existing db. If you just want to create a local test db divert-system will direct calls to local.#2021-03-1915:51kennyThe DesiredCapacity knob is a bit strange when deploying a query group. We have our query groups deployed with auto scaling, so the ASG is "managing" the desired count. If I change a parameter (e.g., MaxSize) and have DesiredCapacity set, CloudFormation will actually set the DesiredCapacity to the value passed in. This is pretty nasty, particularly in the situation where you'd be increasing the MaxSize (e.g., DesiredCapacity is set to 2 and MaxCapacity is set to 4. You're currently running at MaxCapacity and need to immediately increase MaxCapacity to meet current demand. You set MaxCapacity to 6 and update the CF script. CF will set MaxCapacity and DesiredCapacity to 2. This makes the high demand situation worse!). The safest workaround seems to be to always have DesiredCapacity set to MaxCapacity. Does the Datomic team have any advice on how to handle a situation like this?
FWIW, we deploy our services onto Fargate via Pulumi. Pulumi has the ability to set "ignoreChanges" on certain properties (e.g., ignoreChanges: ["desiredCount"]). This lets me set an initial desiredCount for my Fargate service and ignore any changes that have occurred since initialization. It would seem a similar knob for query groups would solve this issue.#2021-03-1915:53ghadi@kenny are you making use of drift detection?#2021-03-1915:54kennyNo. I've seen it in the console before but never used it and I am not familiar with it. Does it help in this situation?#2021-03-1915:56ghadiyes it helps identify drift between reality and the CF template#2021-03-1915:56ghadiCF manages the ASG which manages the instances#2021-03-1915:56ghadithe ASG is not the head honcho#2021-03-1915:57ghadiI would recommend consulting the drift detection before making manual changes to resources#2021-03-1915:57kennyIn the case of DesiredCapacity, why do I care about "drift"?#2021-03-1915:58kennyA change in DesiredCapacity doesn't seem like drift.#2021-03-1915:58kennyDrift sounds like something unexpected. I fully expect DesiredCapacity to change 🙂#2021-03-1915:59ghadidesired capacity is an ASG parameter#2021-03-1915:59ghadiit's part of the ASG API, right?#2021-03-1916:00ghadiI get that it's the thing most likely to be manually tweaked outside of source control#2021-03-1916:00ghadibbiab#2021-03-1916:00kennyI read https://aws.amazon.com/blogs/aws/new-cloudformation-drift-detection/ on drift detection. It sounds like a heck of a lot of extra work. When updating an ASG, you don't need to set desired capacity again.#2021-03-1916:05kennyDrift detection also doesn't solve the problem. Say I run the drift detection and see the actual ASG desired capacity is different than the capacity set in the CF parameters. What action am I supposed to take? Change my CF DesiredCapacity param update to match the current state? That's not what I actually want.
I want to update MaxCapacity and leave DesiredCapacity unchanged. I don't want all future updates to set DesiredCapacity to the value I am forced to set it to for this particular update. Worse, DesiredCapacity could have changed from the time I ran the drift detect and the time I run the CF update.#2021-03-1916:57ghadidrift detection is just a tool that exposes changes made to resources under management by a cloudformation stack
Is there a particular reason you don't want to change Min/Max/Desired via CloudFormation?#2021-03-1916:58ghadijust trying to understand, not suggest anything specific#2021-03-1917:58kennyI am changing the Max via CF. I'm saying, perhaps poorly 🙂, that changing via CF has undesirable side effects.#2021-03-1918:01kennyWith the exception of DesiredCapacity, all query group CF parameters are managed (updated/changed) via a CF update. The DesiredCapacity is only ever controlled by the ASG scaling policy.#2021-03-1918:02Joe LaneSo, is this a feature request?#2021-03-1918:05kennyOr bug report. Was trying to start from the problem to ensure that the problem was actually a problem.#2021-03-1918:08ghadiI thought Desired was part of the cf stack params#2021-03-1918:08ghadiThat would explain our misunderstandings#2021-03-1918:10kennyIt is. #2021-03-1918:11kennyIt’s a required param. #2021-03-1917:34Joe Lane@kenny this is happening in what context?
A datomic version upgrade
A datomic parameter update
Something else?#2021-03-1917:58kennyChanging MaxCount from 4 -> 6.#2021-03-1917:58kennyThe example I gave is the exact thing that happened to us 🙂#2021-03-1918:00Joe LaneWhat does this have to do with datomic cloud? Are you reporting a bug or bemoaning how CF works?#2021-03-1918:01Joe Lane(I'm not saying it doesn't have anything to do with Datomic cloud, I just don't understand in what usage scenario you're running into this.)#2021-03-1918:05kennyI think Datomic's CF implementation leads to undesirable results (though, I am not a CF expert so it could be a problem with CF itself). If DesiredCount were an optional parameter, I think this would not be an issue.#2021-03-1918:09Joe LaneSo you're saying that the QueryGroup CF template always sets the DesiredCapacity and the MaxCapacity equal to the same value?
• When upgrading a stack to a new version?
• When updating the stack for some reason?
• Something else is happening?
#2021-03-1918:09kennyThe problem is that CF is trying to manage the DesiredCount parameter (ensure actual DesiredCount matches the DesiredCount set in the params). This is problematic because the DesiredCount is entirely managed by the ASG. #2021-03-1918:10kennyNot exactly. The problem is that the CF template is always setting the DesiredCount param. #2021-03-1918:12kennyBesides the very first run, I never want CF to touch the desired count parameter. #2021-03-1918:12Joe LaneAnd it shouldn't because....?#2021-03-1918:12Joe LaneOk#2021-03-1918:13Joe LaneAnd so it's "Always setting the DesiredCount param"
• When you're upgrading a version?
• Updating a stack with some new config value (like an env-var)?
• Something else?#2021-03-1918:13kennyThis is the situation:
DesiredCapacity is set to 2 and MaxCapacity is set to 4. You're currently running at MaxCapacity and need to immediately increase MaxCapacity to meet current demand. You set MaxCapacity to 6 and update the CF script. CF will set MaxCapacity and DesiredCapacity to 2. This makes the high demand situation worse#2021-03-1918:13kennyUpdating MaxCount.#2021-03-1918:13kennyI'm not sure on the other situations.#2021-03-1918:14kennyIntuitively I would expect the same behavior (i.e., DesiredCount gets set to the CF param). Would need to test it to verify ofc.#2021-03-1918:16Joe Lane> DesiredCapacity is set to 2 and MaxCapacity is set to 4.
In the ASG or CF?
> You set MaxCapacity to 6 and update the CF script
Again, ASG or CF?#2021-03-1918:18Joe LaneWhat is "CF script"? Are you referring to your scripts or the QG CF template parameters? This is an update right?#2021-03-1918:19kennyOh, sorry. In all cases I mean query group CF template. #2021-03-1918:19kennyAnd yes, an update. #2021-03-1918:22Joe LaneSo, In the template you have the DesiredCount at 2, MaxCount at 4. You're at your max, because you manually changed the ASG DesiredCount to 4 (to match your load).
The problem is that when you update your MaxCount from 4 to 6 but leave the DesiredCount at 2, you think it should keep the 4 you manually set in the ASG console?
Am I close? If I'm wrong, can you copy the above prose and edit it, then paste it back here in this thread?#2021-03-1918:33kennySo, In the template you have the DesiredCount at 2, MaxCount at 4. You're at your max, because the ASG scaling policy scaled up the group DesiredCount to 4 (to match your load).
The problem is that when you update your MaxCount from 4 to 6 via a CF template update but leave the DesiredCount at 2, I think it should keep the 4 the ASG scaling policy set. #2021-03-1918:37Joe LaneWhat is the ASG scaling policy based upon? Why has it scaled up to 4 machines? CPU, mem?#2021-03-1918:38kennyCpu#2021-03-1918:38kennyTarget track 50%. #2021-03-1919:33kennyWorth opening an ask.Datomic topic on this? #2021-03-1919:46em> You set MaxCapacity to 6 and update the CF script. CF will set MaxCapacity and DesiredCapacity to 2. This makes the high demand situation worse
Curious, if you set MaxCapacity to 6, why would CF set MaxCapacity to 2?
The problem makes sense though, as a dynamic parameter DesiredCapacity is modified by ASG to control behavior. But CF template updates want to set it as a default param. Maybe an optional tick box in the template would solve the issue without breaking changes?#2021-03-1919:52jaret@kenny how are you updating? from CFT? CloudFormation does not change your ASG settings when you update or upgrade. It reads your ASG settings. Maybe I am missing something here, but adding machines is going to potentially bounce the process monitoring CPU or lower the CPU average, right?#2021-03-1919:57kennyYes, the objective would be lower cpu. Since I’m not fiddling things in the console directly, there could be something in our deployment process affecting this. Let me create a minimal repro and get back to you in a couple hours.#2021-03-1920:04jaret@kenny what do you have set for your warmup time?#2021-03-1920:04jaret#2021-03-1920:06kenny300#2021-03-1920:06jaretSo the default, then#2021-03-1920:06jaretand did you disable scale in?#2021-03-1920:07kenny#2021-03-1920:10Joe LaneAnd you're using i3.xlarges?#2021-03-1920:10kennym5#2021-03-1920:12Joe LaneHave you timed how long they take to come up and start accepting traffic? If you start reporting metrics before that, they may start reporting CPU metrics against your ASG policy, lowering the utilization, and killing instances.#2021-03-1920:16kennyI have not.
I do not think that was what happened. I recall checking in the ASG activity log and saw a DesiredCount change due to CF (I think). Need to double check. Not at a computer atm and AWS mobile console is hard to navigate. #2021-03-1920:26jaret@kenny I think I understand your issue, let me know if this clarifies it for you. I think the missing piece is that AWS CF looks at parameters it sets. It only knows about parameters it has set. If you have changed your ASG settings outside of CF (i.e. your policy changed your settings) the CF defaults the parameter to the last CF-set parameter. Does that make sense? So you need to manually set these parameters or update your script to query for the currently set "DesiredCount" in you asg and inject that into your DesiredCount CloudFormation parameter.#2021-03-1920:44kennyYes, I think that’s exactly what happened.
I don’t think those extra steps are necessary though. Why is DesiredCount a required parameter?#2021-03-1920:45kenny(Your solution is inherently racy 🙂)#2021-03-2013:34jaretKenny, I think it's required because we are setting up an ASG with the CFT (per AWS requirements) when you launch the CFT. I will double check that with the team.#2021-03-2215:34kennyLooks like it's optional: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#cfn-as-group-desiredcapacity#2021-03-2218:54kennyFrom the doc ^
> If you do not specify a desired capacity when creating the stack, the default is the minimum size of the group.#2021-03-2417:24kennySince this conversation is likely to fall off of the Slack retention window, I have added a question here: https://ask.datomic.com/index.php/603/desiredcapacity-optional-parameter-query-group-template#2021-03-2109:29zendevil.eth@lanejo01, I created an env in the same vpc as the datomic system, and removed the :creds-profile and :proxy-port keys. Deploying now is giving the following error in the EB logs:
Mar 21 09:21:20 ip-10-213-10-2 web: :data {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message Forbidden to read keyfile at . Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.}
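(Editorial aside: one way to check exactly what this anomaly complains about, i.e. whether the instance's ambient credentials can GetObject the keyfile, is to attempt the read from a REPL on the instance. A sketch using Cognitect's aws-api; the bucket and key below are placeholders, since the real path is elided in the message above.)

```clojure
;; assumes com.cognitect.aws/api, .../endpoints and .../s3 on the classpath
(require '[cognitect.aws.client.api :as aws])

(def s3 (aws/client {:api :s3}))

;; substitute the bucket/key from the real anomaly message
(def result
  (aws/invoke s3 {:op :GetObject
                  :request {:Bucket "<datomic-system-bucket>"
                            :Key    "<path-to-keyfile>"}}))

;; aws-api reports failures as anomaly maps rather than throwing
(when (:cognitect.anomalies/category result)
  (println "ambient role cannot GetObject the keyfile:" result))
```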
#2021-03-2109:59zendevil.ethThis is the network configuration of my EB environment:
Instance subnets: subnet-032101d2746e9b351
Public IP address: disabled
VPC: vpc-0eb74ad57465ba9df
Visibility: public
I’ve double checked that the VPC id is the same as that of datomic-humboi-march-2021 in the list of VPCs#2021-03-2115:00Joe Lane@ps what command are you running to get the anomaly exception? “Deploying” isn’t specific enough because I don’t know how your project is “deployed”.
#2021-03-2115:01zendevil.eth@lanejo01 I’m uploading the .jar file from the console ui with the java platform#2021-03-2115:03zendevil.eth#2021-03-2115:03zendevil.eth#2021-03-2116:02zendevil.eth@lanejo01 so even though the beanstalk webserver and the datomic system are in the same vpc, the keys can’t be accessed. Why?#2021-03-2116:02zendevil.eththe documentation also mentions that there’s no extra setup required#2021-03-2123:56zendevil.eth@lanejo01, @jaret any ideas why the .key files can’t be accessed by the elastic beanstalk env which is in the same vpc as the datomic system?#2021-03-2200:32Joe LaneHi @ps, I plan to discuss this with the team tomorrow. #2021-03-2215:03zendevil.eth@lanejo01 any updates? #2021-03-2215:29Joe Lane@ps I spent Sunday afternoon making a plan, check your DMs.#2021-03-2218:04kennyCan Datomic run on ARM based processors?#2021-03-2308:50tatutis there an easy way to restart query group nodes without redeploying ions? I have some things nodes do on startup that I’d like them to redo#2021-03-2309:05tatutit seems AWS autoscaling groups has an “instance refresh” functionality, which looks like it would fit the bill#2021-03-2312:44Joe LaneDoes that restart the machine or replace it?#2021-03-2313:41tatutit creates new instances#2021-03-2313:41tatuttried it in our dev env that is also production topology, and it worked well#2021-03-2313:56Joe LaneGlad it worked for you.
An important thing to remember is that i3.large / i3.xlarge instances support local valcache on the NVMe SSDs.
If you replace instances with new ones, you lose a 500 / 1000 GB SSD backed cache of your database (per qg instance) and the new instances will need to rebuild that cache, one miss at a time.
Maybe it's not a problem for your situation, but if you find that you're troubleshooting operational issues (degraded query performance, for example), refreshing (aka replacing) the instances shouldn't be the first thing you try.
All of the above assumes your QG instances are either i3.large or i3.xlarge.#2021-03-2408:26tatutgood to know… not a problem here as this is a temporary measure for something that happens rarely#2021-03-2322:52Michael Stokleycan we cancel or abort an in progress write?#2021-03-2323:07Joe LaneDo you mean a transaction?#2021-03-2323:08Joe LaneYou may be looking for https://docs.datomic.com/cloud/transactions/transaction-processing.html#cancel#2021-03-2323:22Michael Stokleyyes, a transaction#2021-03-2323:22Michael Stokleythank you!#2021-03-2408:59furkan3ayraktarI’ve asked the same question in https://forum.datomic.com/t/message-listeners-in-ions/860/4 on Datomic forum but I wanted to shoot it here as well thinking that more people might see it.
I’m wondering if anyone has a solution for this. My understanding is, it is okay to run background threads (such as polling from a queue) as long as it is coordinated through Lambda calls. However, this leaves me with another question.
Let’s say I have a Lambda ion that starts the background thread after a deployment. I can trigger that Lambda function via an EventBridge rule that watches the Datomic Ion deployments on CodeDeploy. This way, it is certain that the Lambda function runs after each successful deployment and the background thread will be started.
However, suppose I have one query group with, say, three nodes, where each node should start a background thread. If I’m understanding correctly, the Lambda ion call will only be executed on one of the nodes in the query group rather than all of them. Is there a specific Lambda ion type that executes on all of the nodes, or am I missing something else?#2021-03-2411:09tatutyou don’t need lambda to run background thread on every node#2021-03-2411:11tatutwe have a thread that polls for configuration changes in SSM parameter store that is simply started when a particular ns is loaded#2021-03-2411:11tatutwell, a j.u.c executorservice to be exact#2021-03-2412:11furkan3ayraktarHow do you handle things like if you want to stop the background thread or if it’s stopped for some reason unintentionally, start it again? In the thread I mentioned above, the Datomic team suggested controlling the background threads via a Lambda Ion. Do you have a solution for that?
Another question regarding your setup. How do you start the background thread initially, after deployment. Do you just have a line in one of your namespaces where once it’s loaded, it starts the background thread?#2021-03-2416:39emAlso curious about this, and more generally about if it's appropriate to put "initialize" (system/start) code as side-effecting function calls rather than a check every time a handler is called?#2021-03-2419:58emPoking around more it seems like there are serious issues with that approach that could cause deploys to fail (one such experience report https://jacobobryant.com/post/2019/ion/, under "Deployment") like calling ion/get-params before any web requests. Given the official team's response on saying that all such stateful system updates should be done through lambda/external calls, @U2BDZ9JG3’s search for a better way of hitting all nodes with a request might be fairly important.#2021-03-2508:17tatutwe haven’t had need for stopping the background stuff#2021-03-2514:46furkan3ayraktar@U0CJ19XAM Do you have any best practices / solutions in this topic?#2021-03-2514:53Joe LaneWhat actual problem are you trying to solve @U2BDZ9JG3?
• "Having background threads do work" isn't a problem, it's a capability.
• "I have to process a lot of data and I need to apply back-pressure to ensure I don't overwhelm my system" is closer to a problem.
I could tell you all sorts of things about lambdas, coordination, orchestration of stateful things in a distributed system, but I'd rather have the concrete scenario.#2021-03-2515:16furkan3ayraktarThanks! I’m trying to figure out the best way to implement this and it would be very helpful if you can direct me to the right direction. Here is a concrete scenario. Let’s say I have a SQS queue and have a query group that is named worker-query-group. The each node in the worker-query-group is tasked to have a background thread which will poll from SQS continuously and process the messages received. I have this setup:
1. Ion Deploy worker-query-group
2. A new deployment is created in CodeDeploy
3. CodeDeploy deployment is successful
4. An EventBridge event is triggered after successful deployment
5. EventBridge event triggers a Lambda Ion, named sqs-poll-controller
6. sqs-poll-controller Lambda Ion is executed in one of the nodes within the worker-query-group
7. Polling from SQS is started
I can also call sqs-poll-controller Lambda Ion manually to start/stop the background thread for polling from SQS.
I have a problem when I have more than one nodes in the worker-query-group. The Lambda Ion (sqs-poll-controller) executes only on one of the instances and I’m in the search of figuring out how I can control all of the nodes within the query group with a Lambda Ion or any other way that is recommended.
I got this idea of controlling background threads via a Lambda Ion from a Datomic team member’s #2021-03-2515:36Joe LaneIs this actually your use-case? Do you actually have an SQS queue of work?#2021-03-2515:39Joe Lane> I have a problem when I have more than one nodes in the worker-query-group.
This is a problem with one of many possible SOLUTIONS to problem X. What is problem X?#2021-03-2515:51furkan3ayraktarYes, I’m not creating a non-existing issue. I have an SQS queue and a query-group dedicated for consuming messages from that SQS queue. Since there are many messages pushed to the queue, in order to increase the capacity, we wanted to add more nodes by increasing the number of instances in the auto-scaling group for the query-group. I agree, there might be different solutions to this problem. What I’m trying to learn is what is the best practice. I’m open to the ideas how to overcome the problem where there is one queue full of messages that needs to be read and processed.#2021-03-2515:52furkan3ayraktarAnd, the problem you quoted above comes from the fact that I’m trying to control (start/stop) background threads in the nodes via a Lambda Ion. I got that idea from the forum, but you can direct me a totally different approach to the root problem I’m trying to solve, which is, having a queue full of messages and needing nodes to read those messages and process.#2021-03-2516:04Joe LaneInstead of using a pull based integration (workers polling) you should assess a push based model combining https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html . You would need to enhance one of the roles we create for an ion with an additional policy (documented in the link above) to allow SQS to invoke the lambda. Technically this isn't something we officially support, but I've seen it done successfully.
A downside with this approach is that you run the risk of overwhelming the primary group with transactions from your autoscaling query group if you're not careful.
The upside is that you don't have any state to manage in a distributed system.#2021-03-2516:28Joe LaneUsing Step Functions with Ions is another approach, depending on the needs of the business problem. (e.g., it is a long running process that requires human approval steps)#2021-03-2523:41furkan3ayraktarI’ve implemented polling SQS via Lambda in another project in the past, however, Lambda polling SQS has some issues. You can see a glimpse of it here in this https://zaccharles.medium.com/lambda-concurrency-limits-and-sqs-triggers-dont-mix-well-sometimes-eb23d90122e0. For that reason, I prefer polling from the query-group rather than relying on a Lambda. I’m having hard time imagining a solution with Step Functions to this problem. Also, both cases will incur additional costs and complexity.
Anyway, my understanding is that there is no best practice around communicating with all of the nodes within a Datomic query group. Something like a special kind of Lambda Ion which could trigger a Clojure function on all of the nodes of a query-group would be very nice to have in order to communicate with the running nodes easily.#2021-03-2415:52joshkhi saw in the latest release notes:
• Upgrade: AWS Lambda Runtimes moved to NodeJs 12.x
does this mean faster cold starts for Ions?#2021-03-2415:55Joe LaneNo, it means that the NodeJS version used to power operational automation lambdas has been upgraded (there is a hard deprecation of the previous version around the end of March and if you don't upgrade you WILL experience an operational outage when a cluster node goes down because a new one won't come back up.)#2021-03-2415:58joshkhthanks. i realised my question was a little absurd given that it's a compute upgrade but i thought i'd ask anyway. and thanks for the warning about the hard deprecation. that's good to know.#2021-03-2511:37tatutan outage? what does it mean “cluster node goes down”?#2021-03-2513:38Joe LaneIf for some reason a machine needs to be replaced (an auto-scaling event, for example), it wont be.#2021-03-2514:15Joe LaneFWIW, it looks like AWS pushed the https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html from the end of March.#2021-03-2613:17tatutok, so we should upgrade compute groups asap#2021-03-2613:20Joe LaneYep#2021-03-2416:56Joe LaneFWIW @joshkh, I looked at the https://aws.amazon.com/lambda/pricing/#Provisioned_Concurrency_Pricing and the prices look reasonable for minimum provisioning of 2. It's ~$7.67 per month (672 hours, specifically) for 2 provisioned lambdas with 256MB (ions use this) processing 1 Million requests, each taking 1 second. This isn't limiting you to two Lambdas either.#2021-03-2418:38kennyI've added a feature request to support newer AWS instance types in Datomic Cloud. If you're interested in saving 10-20% off your Datomic AWS bill, please vote this feature here: https://ask.datomic.com/index.php/604/support-for-recent-aws-instance-types.#2021-03-2418:42kennyFor those curious, I did try modifying the query group CF template to manually add those instance types in. This will fail in the ASG with the following message:
> The instance configuration for this AWS Marketplace product is not supported. Please see the AWS Marketplace site for more information about supported instance types, regions, and operating systems. Launching EC2 instance failed.
ARM based processors will fail with this message:
> The architecture 'arm64' of the specified instance type does not match the architecture 'x86_64' of the specified AMI. Specify an instance type and an AMI that have matching architectures, and try again. You can use 'describe-instance-types' or 'describe-images' to discover the architecture of the instance type or AMI. Launching EC2 instance failed.#2021-03-2513:47pvillegas12Hello! We are getting timeouts in this step of codedeploy consistently
aws s3 cp /home/datomic/.cognitect-s3-libs/.m2/repository --recursive --only-show-errors --exclude * --include crisptrutski/boot-cljs-test/0.2.2-20160402.204547-3.zip
Is this a known problem?#2021-03-2515:24jaretUpdate. This issue is resolved in https://docs.datomic.com/cloud/releases.html#ion-dev-282
> Improvement: Limit how long to wait for a cluster node to gracefully shutdown.#2021-03-2513:58ccortesWhat's your approach to doing analytics? Is the out-of-the-box support good enough in your experience or do you use any other ETL tools? Mainly I'm looking for a way to work with datomic data in Python#2021-03-2514:12Joe Lane@cuaucortes (assuming Cloud) have you looked at https://docs.datomic.com/cloud/analytics/analytics-jupyter.html ?
(there is a similar entry for on-prem)
#2021-03-2518:43futuroAre the docs (https://docs.datomic.com/cloud/index.html) down for anyone else?#2021-03-2518:43futuroI get the following when I visit that#2021-03-2518:46Joe Lane@futuro Thanks for the heads up, we'll check it out.#2021-03-2518:46futuro👍:skin-tone-2:#2021-03-2519:03Joe Lane@futuro Fixed#2021-03-2519:03futuroFantastic, thank you 🙂#2021-03-2522:27Rob BHi there.
Can anyone confirm if Datomic will run on Azure and/or any docs/blogs. I've done some googling but didn't find much.#2021-03-2620:03Ørnulf RisnesCan confirm. We run Datomic on-prem in Azure. Both peers and transactor as Azure Container Instances, with Postgresql as backend. Setup running in production since September 2020.#2021-03-2813:54Rob BThanks @UQGBDTAR4#2021-03-2611:05arohneron-prem should work just fine in Azure. Cloud most likely won’t#2021-03-2613:19thumbnailHi 👋:skin-tone-2: Just ran into this error using Datomic Analytics on premise using datomic 1.0.6269:
Query 20210326_131630_00006_px9te failed: No matching clause: :db.type/uri
#2021-03-2613:44jaretHi @UHJH8MG6S are you using the latest supported https://repo1.maven.org/maven2/io/prestosql/presto-cli/348/presto-cli-348-executable.jar I think? Or are you using another version? I want to make sure I track this so can you shoot an e-mail to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. I will make a case to investigate internally, but if it turns out we need to bump presto again as a type mapping changed I'd like to be able to keep you updated.#2021-03-2613:45thumbnailI'm using the bundled version of datomic 1.0.6269, I'll see which one that is.#2021-03-2613:45thumbnailpresto:x> SELECT node_version FROM system.runtime.nodes;
node_version
--------------
348
So I presumably, yes the latest version.#2021-03-2614:08jaretNevermind! I am an idiot. I forgot that URI is an unsupported type in analytics.#2021-03-2614:34thumbnailAha! So this is expected behavior?#2021-03-2613:52futuroI'm using composite tuple attributes as a :db.unique/identity for an entity, and I've noticed that I need to transact the composite attributes along with the composite tuple form to get the upsert functionality. If I don't also transact the tuple, the attributes are first associated with a new EID and then an attempt is made to assert the composite tuple, which fails because it already belongs to another entity. Is this expected?#2022-10-2314:46CaseyHi futuro, sorry to necro this really old msg of yours. But did you manage to find an answer? I've got the exact same question.#2022-10-2314:50futuroHey Casey, no worries. I don’t remember getting an answer to this question, and I don’t work on that codebase anymore lol. :thinking_face: most likely I just transacted the tuple along with the attributes. #2021-03-2620:39ccortesDon't know if this is the right place to ask, but here it goes: I'm building an API using liberator and dotamic cloud. I'm trying to do some simple stuff but I can't create a connection with my datomic db, I have previously used the same db in a Luminus project and works perfectly fine.
The ns where all my datomic functions are looks like this
(ns api.db.cloud-core
(:require
[datomic.client.api :as d]
[mount.core :refer [defstate]]))
(defonce cfg {:server-type :ion
:region "region"
:system "system"
:endpoint "url"
:proxy-port 8182})
(defonce db-name {:db-name "name"})
(defstate conn
:start (as-> (d/client cfg) c
(d/connect c db-name))
:stop (-> conn .release))
...
...
There's no issue when I run lein ring server , but when I make a request which hanndle function uses something from datomic I get the following error: java.lang.IllegalArgumentException: No implementation of method: :db of protocol: #'datomic.client.api.protocols/Connection found for class: mount.core.DerefableState
So I replaced (defstate conn ... ) with just (def conn (d/connect (d/client cfg) db-name)), and now I can’t even start the ring server. All I get is Syntax error (ClassNotFoundException) compiling at (cognitect/http_client.clj:1:1). org.eclipse.jetty.client.HttpClient
I'm completly lost since I don't know whats causing this, apparently its a dependency conlifct but my project file looks like this:
(defproject api "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:url ""
:license {:name "EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0"
:url ""}
:plugins [[lein-ring "0.12.5"]]
:ring {:handler api.core/handler}
:repositories {"" {:url ""
:creds :gpg}}
:dependencies [[org.clojure/clojure "1.10.1"]
;Database
[com.datomic/client-cloud "0.8.105"
:exclusions [org.eclipse.jetty/jetty-http
org.eclipse.jetty/jetty-util
org.eclipse.jetty/jetty-client
org.eclipse.jetty/jetty-io]]
[mount "0.1.16"]
[com.fasterxml.jackson.core/jackson-core "2.11.1"]
;Partners
[buddy/buddy-hashers "1.4.0"]
[clj-http "3.12.0"]
[danlentz/clj-uuid "0.1.9"]
;Liberator
[liberator "0.15.3"]
[compojure "1.6.2"]
[ring/ring-core "1.9.2"]
[ring/ring-json "0.5.0"]]
:repl-options {:init-ns api.core})
Any tips on what I should do to connect successfully to my db?#2021-03-2620:46ghadithe error says "class not found: org.eclipse.jetty.client.HttpClient"
and the deps have a bunch of exclusions around org.eclipse.jetty#2021-03-2620:46ghadiwhy the exclusions?#2021-03-2620:46ghadithose seem implicated#2021-03-2621:46ccortesI removed the exclusions but still get an error message Syntax error (ClassNotFoundException) compiling . at (cognitect/http_client.clj:92:19). org.eclipse.jetty.http.HttpCompliance#2021-03-2622:25Joe LaneDid you lein clean and rebuild your snapshot?#2021-03-2801:54zendevil.ethI have this query:
https://gist.github.com/zendevil/4d0f1a844208ceb01375c686eec1f930
And basically I want to sort the ?content based on the attribute :content/event-timestamp-lng which is a long. What’s the best way to do that? Does it have to be using a custom aggregate or can it be achieved inside the query? If it has to be a custom aggregate, how do you combine the pull expression with it?#2021-03-2811:43thumbnailUsually you sort in clojureland, using sort-by or the likes#2021-03-2811:43thumbnailDatomic always returns a returnset, which has no order guarantees#2021-03-2900:09souenzzo(-> '[:find (pull ?content [*])
:in $ ?user-id-string
:where
[?user-id :user/id-string ?user-id-string]
[?content :content/user ?user-id]
(not [?content :content/deleted true])]
(d/q db user-id-string)
(->> (map first)
(sort-by :content/event-timestamp-lng)))
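(Editorial aside: if newest-first order is wanted, the same pipeline works with a descending comparator; query, db, and the attribute name are as in the snippet above.)

```clojure
;; newest first: sort descending on the timestamp long
(->> (d/q query db user-id-string)
     (map first)
     (sort-by :content/event-timestamp-lng #(compare %2 %1)))
```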
#2021-04-0110:18pmooserDoes anyone know what it tends to mean when you connect to a transactor and typically see log messages of type :log/catchup-fulltext ?#2021-04-0112:52jaretI take it you are using on-prem? Are you encountering an error or performance issue that has you concerned about this message? In general, the messages around catchup are INFO level logs that show on connect to a DB to load the latest for fulltext attributes. But this is for troubleshooting support only and not a relied upon metric.#2021-04-0117:43pmooserYes, we're using on-prem. When we see this message, there's a significant latency in connecting to datomic - sometimes minutes. The connection completes after it finishes logging this stuff.#2021-04-0800:03jaretHey @U07VBK5CJ log a support case with me and I’ll take a look. You can just email a general description to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. If you can attach transactor logs (active and standby) that’d be ideal.#2021-04-0800:03jaretSorry for the late reply, I just saw this thread update today#2021-04-0815:01pmooserNo problem @U1QJACBUM - thanks for the reply. I've mentioned this to my boss, who deals with our datomic license. I'll see if it is possible to get the transactor logs as well.#2021-04-0206:08Oliver GeorgeI'm researching how Datomic cloud functionality fits with a user extending or constraining the schema. Simple things like adding attributes is fine. Attribute and entity predicates sound useful.
Question about entity predicates... it seems they are designed to reference predicates implemented as code and deployed. Are there ways the predicate could be defined or configured without redeploying (e.g. transacting an s-expr to use as a predicate)?#2021-04-0208:48Oliver GeorgePretty sure the answer is no for sensible reasons. One workaround might be some generic predicates configured by data accessible via the db arg. #2021-04-0214:21Joe LaneHi @U055DUUFS, I think what you're asking for is "How can I supply an arbitrary predicate in a transaction along side the data it should operate upon" Correct?#2021-04-0220:47Oliver GeorgeSounds like an interesting approach#2021-04-0213:37ennGood morning. Can someone confirm that I’m correctly understanding the following quality of composite tuple attrs?
Let’s say I have a composite tuple attr :thing/foo+bar of two attrs :thing/foo and :thing/bar. Once this attr exists, I can never create a new attr :thing/foo, even if I first rename the existing :thing/foo to something else beforehand, because the original :thing/foo participates in the composite tuple attr under its original name.
In practice this seems to be the case--I get a :db.error/cannot-retarget-ident. I’m wondering if there is any workaround here.#2021-04-0213:42ennFor example, maybe I could:
1. rename :thing/foo to :thing/foo-old
2. update the :db/tupleAttrs of :thing/foo+bar from [:thing/foo :thing/bar] to [:thing/foo-old :thing/bar] (is this possible?)
3. add new attr :thing/foo
4. update the :db/tupleAttrs of :thing/foo+bar back to [:thing/foo :thing/bar]
But I can’t tell from the documentation if it’s even possible to change the :db/tupleAttrs on an existing attribute.#2021-04-0214:02prncHello 👋
Assuming that I have quite a bit of urls i.e. :db.type/uri in Datomic and want to be able to query them, say by host.
Is this OK (1) (idiomatic, performance etc.) or should I be “unpacking urls” into attributes (2)?
;; e.g. (1) "query the object"
'[,,,
:where
[(.getHost ?url) ?h]
[(= "" ?h)]]
;; OR e.g. (2) "store expanded", and query `:uri/host`
{:uri/host "",
:uri/path "/wiki/Safari_(web_browser)",
:uri/full-uri
#object[com.cognitect.transit.impl.URIImpl 0x58ec6b0c ""]}
#2021-04-0214:42Joe LaneIt depends on:
• How many is "quite a bit"?
• Will you be filtering on any other criteria first?
• What is your SLA (if you have one)?
• The exchange of time vs space (like any algorithm)
An interesting thought exercise would be to think about 1. How much "work" is needed between the approaches and 2. When do you want to do that work?
Approach 1. Creates lots of objects at query time, which can consume memory and take CPU time (maybe a negligible amount, depends on the size of "quite a bit") but you do it every time you query for every URI you need to compare against (including ones that won't end up in the results). Maybe you don't want to create 3x as many datoms (approach 2) so you're ok with that.
Approach 2. performs the work only once, but exchanges space for that time.
Things to consider:
• How big is "quite a bit"
• How big is your database going to get? Do you have a plan for this?
• With production workloads (I assume you'll have these?) how much memory will one of these queries consume, and, how many queries do you expect to be serving at a time. This will determine how much memory you might need per peer (on-prem) / query group node (cloud). THIS MAY DECIDE YOUR ANSWER FOR YOU
• You should MEASURE THE DIFFERENCES YOURSELF! Only you can determine which solution will be performant enough!
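(Editorial aside: for approach 2, the expansion can be done once at write time. A minimal sketch reusing prnc's attribute names; java.net.URI and the example URL stand in for the actual URI type and data.)

```clojure
;; build the approach-2 datom map once, when the entity is transacted,
;; instead of calling .getHost at every query (approach 1)
(defn uri->tx-map
  [^java.net.URI uri]
  {:uri/full-uri uri
   :uri/host     (.getHost uri)
   :uri/path     (.getPath uri)})

;; (uri->tx-map (java.net.URI. "https://example.com/wiki/page"))
;; => {:uri/full-uri ..., :uri/host "example.com", :uri/path "/wiki/page"}
```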
#2021-04-0215:23prncThanks Joe! It’s a new project so I’m just getting a sense of what’s needed, experimenting w/ cloud/solo topology atm—to learn how things work. Just wanted to get a sense if both approaches are valid and get a better grasp of trade-offs, which you kindly provided 😉 So, thanks again!#2021-04-0215:25Joe LaneGlad to help, always happy to talk about this kind of stuff!
#2021-04-0217:05shieldsHello, I have a question about using Datomic Cloud along with https://docs.aws.amazon.com/msk/latest/developerguide/what-is-msk.html. I've https://docs.aws.amazon.com/msk/latest/developerguide/create-cluster.html the Kafka clusters in my Datomic VPC and added them to the subnets. I understand that the bastion just gives access to the Client API if I'm not mistaken. Is there a way to dev locally w/ the created "bootstrap.servers" in MSK? Any additional steps? Or is it not possible?
Apologies if I'm missing anything obvious, but any suggestions would be helpful. Thanks.#2021-04-0219:29csmAIUI MSK is only accessible within your VPC, so to connect from your local you either need your own bastion or a VPN connection #2021-04-0218:20kennyUnder "https://docs.datomic.com/cloud/releases.html#current" there is this bullet point:
> Ion libraries (`ion` and `ion-dev`) are available on the https://docs.datomic.com/cloud/ions/ions-reference.html#libraries.
The link https://docs.datomic.com/cloud/ions/ions-reference.html#libraries seems to take you to the wrong spot (just the top of the "Ions Reference" page). That doesn't seem like the desired location.#2021-04-0312:30Aleh AtsmanHello!
I am currently trying to add custom meta attributes to resources that are created by datomic cfn template. I have difficulties adding tags to datomic buckets. For example in the cfn-template datomic-production-compute there is a custom resource EnsureBucket :
{
"DatomicCodeBucket":{
"Type":"Custom::Resource",
"DependsOn":[
"EnsureBucketLogGroup"
],
"Properties":{
"ServiceToken":{
"Fn::GetAtt":[
"EnsureBucket",
"Arn"
]
},
"BucketBaseName":"datomic-code",
"TagKey":"datomic:code",
"TagValue":{
"Ref":"AWS::Region"
}
}
}
}
How do I add my custom tags to this resource? It accepts only TagKey and TagValue. In the code of the custom resource I can't find any logic for handling additional tags.
I was planning to have a traversal utility that would walk the cf tree and add tags based on resource type, but because many resources are custom resources with a very limited api, it seems to me that there is not much I can actually do.
Any advice on this? The requirement comes from organizational rules. Should I come up with a solution that adds tags after deploy?
Thank you!#2021-04-0503:19Michael LanHi! I have loaded the sample data from here: https://docs.datomic.com/cloud/dev-local.html#samples and want to experiment with this data. However, I don’t know how I can view the data… I would appreciate some pointers or links to guides#2021-04-0511:28souenzzoWhy does d/transact not return the "tx" number?
What is the easiest way to get the tx from the return value of d/transact?
(let [conn (-> "datomic:"
(doto d/delete-database
d/create-database)
d/connect)
{:keys [tx-data]} @(d/transact conn [])]
(d/q '[:find ?tx .
:where
[?tx _ _ ?tx]]
tx-data))#2021-04-0511:35mkvlr@souenzzo I think you can use db-after and use that to get basis-t https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/basis-t followed by t->tx https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/t->tx#2021-04-0511:38souenzzoHow can it be done in Cloud?!
The Cloud API does not have the t->tx operation#2021-04-0511:39mkvlr@souenzzo oh sorry didn’t know about that, I’m just using on-prem.
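[Editor's note] A sketch of mkvlr's on-prem suggestion; it also works to read the tx id straight off the first datom in :tx-data, since every datom in a transaction carries the transaction entity id (assuming the peer API, where datoms support keyword access):

```clojure
(let [{:keys [tx-data db-after]} @(d/transact conn [])]
  ;; option 1: any datom's :tx field is the transaction entity id
  (:tx (first tx-data))
  ;; option 2 (mkvlr's): derive it from the basis t of db-after
  (d/t->tx (d/basis-t db-after)))
```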
#2021-04-0511:40souenzzo@mkvlr I'm also on on-prem, but many times we "consider move to cloud"
I'm just accumulating facts to argue that on-prem is still way better than cloud.
#2021-04-0511:44mkvlr@souenzzo yeah, we also considered supporting cloud but the lack of tx-report-queue and the entity api have been a problem for us. We rely heavily on both. (Besides us being currently on google cloud.)#2021-04-0618:51Michael Lanis there a way to print the schema of a database?#2021-04-0618:54Joe Lane@michaellan202 Check out https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/schema_queries.clj, print whatever you want 🙂#2021-04-0618:54ghadi@michaellan202
since schema is still plain data within datomic, you can query for it and print it (using same schema attributes that you use to define it)#2021-04-0618:55Michael LanYea, this was the bit that confused me. So schema is just defined with datoms?#2021-04-0618:55Joe LaneYup#2021-04-0618:55Michael LanThank you both!#2021-04-0618:57ghadi[?n :db/ident ?ident]
^ give me all the named entities in the database#2021-04-0618:57ghadithen you can pull out their :db/valueType or :db/cardinality
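[Editor's note] A fuller sketch of ghadi's suggestion, joining each attribute's :db/valueType and :db/cardinality back to their idents (only the built-in schema attributes are used here):

```clojure
(d/q '[:find ?ident ?type ?card
       :where
       [?a :db/ident ?ident]
       [?a :db/valueType ?vt]  [?vt :db/ident ?type]
       [?a :db/cardinality ?c] [?c :db/ident ?card]]
     db)
```

Because only attribute entities have :db/valueType, this query naturally filters out plain named entities such as enum idents.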
#2021-04-0619:02Michael LanThere seems to be a lot of duplicates? I vaguely remember from a video how to remove duplicates by putting the output into a set but I can’t recall how to do it. Any tips?#2021-04-0619:03Joe Lane(into #{} cat (d/q my-query db))#2021-04-0619:04Michael LanThanks. It turns out there weren’t any duplicates, just a very weird schema. I’m looking at the mbrainz-subset sample dataset right now 😁#2021-04-0619:05Joe LaneWhy do you think that a weird schema? Weird compared to what?#2021-04-0619:26Michael LanThere are a lot of :language/<3 random letter> idents, here is a snippet of the output:
#{:language/mmz :language/thq :language/mdc :language/cno :language/tdk
:language/orr :country/CX :language/nxu :medium.format/dvd
:language/xku :country/MV :language/bil :language/wri :language/zoo
:language/bdu :language/tuc :language/mlh :language/anf :language/kdi
:language/ahi :language/mec :language/kxd :language/bau :country/GG
:language/osa :release/script :language/nki :language/acw
:language/hmi :country/SN :language/lcp :language/ces :language/rej
:medium.format/vinyl :language/cog :language/bfi :language/sfs
:language/brz :language/dae :label/type :language/zuh :language/phw
:language/uam :language/lbu :language/tak :language/bmd :language/chz
:language/jia :language/pic :language/nfa :language/jel :language/gic
:language/kzr :language/yiy :language/lmh :language/ktq :language/trh
:language/hix :language/krl :medium.format/cassette :language/ntj
:language/kvm :language/sld :language/apl :language/guo :script/Lisu
:language/bfx :language/bcl :language/duv :language/pcj :language/bjr
:language/oaa :language/mbi :country/BQ :language/etr :language/tsd#2021-04-0621:08Lennart BuitIt’s pretty common to define enumerable values as idents. Allows you to refer to them with keywords (that check whether the ident exists!) instead of something like ordinal values. In your particular example, these idents appear to be ISO 639-2 language codes 🙂.
https://docs.datomic.com/cloud/best.html#idents-for-enumerated-types
#2021-04-0622:35Michael LanWhat does Only find-rel elements are allowed in client :find mean? I am trying to do:
(d/q '[:find [?month ...]
:where [_ :release/month ?month]]
db)
and the ... is causing this error#2021-04-0622:44favilaThat syntax is only supported on on-prem peer api#2021-04-0622:51Michael Lanthat’s odd. thanks#2021-04-0705:51ayushkaI'm having trouble with datomic client q api, where the aggregate function count returns an empty list instead of 0 if no items were found. What am I doing wrong here? Thanks in advance
(d/q '[:find (count ?e)
:where [?e :migration/name "non_existing_name"]]
db) ;; => [] instead of [[0]]#2021-04-0712:21ayushkaended up doing count outside the query
(count (flatten (d/q ...etc)))
#2021-04-0713:06favilaAggregation functions are not called on empty resultsets#2021-04-0713:08favilaThe flatten shouldn’t be necessary.#2021-04-0713:09favilaif you want to save the bandwidth of returning all items, you can use 0 as a fallback e.g. (or (ffirst result) 0)#2021-04-0714:05ayushkanoted, thanks for the tip @U09R86PA4#2021-04-0807:19babardoI see the default behaviour for adding new entity with pre-existing :db/unique is upsert.
It there a convenient way to turn this behaviour down?
In other words I would want to insert an entity if it doesn't exist or fail.#2021-04-0809:52tatutIsn’t it so that the upsert behaviour is when :db.unique/identity is specified#2021-04-0809:52tatutand not when :db.unique/value#2021-04-0810:10tatut(defn ensure-non-existant [db [lookup-attr lookup-val :as lookup-ref]]
(when (ffirst
(d/q [:find '?e :in '$ '?val :where ['?e lookup-attr '?val]]
db lookup-val))
(throw (ex-info "already exists"
{:lookup-ref lookup-ref}))))
#2021-04-0810:11tatutyou could do something like that in a tx function and call it (ensure-non-existant db [:my/id-attr "42"])#2021-04-0810:13babardoyes sorry i meant db/identity, so it seems that querying the database prior to the transact is the only option we have here#2021-04-0810:44Lennart BuitYou can write it as a database function to guarantee atomicity#2021-04-0813:54futuroYour described use-case sounds like you want :db.unique/value instead, as that will throw an anomaly if you attempt to transact a new entity with an existing value for a :db.unique/value attribute.#2021-04-0813:55futuroThat seems simpler than using transaction functions; am I missing something?#2021-04-0819:55babardook sorry, I get the first answer of @U11SJ6Q0K and the one of @U0JJ68RBR now.
Using :db.unique/value instead of :db.unique/identity does the job, meaning we can still use a lookup ref on this field (as with :db.unique/identity) but an upsert fails with a unique-conflict exception.#2021-04-0821:18futuroYep!#2021-04-0815:21futuroI've got an attribute whose potential values are enums, so I followed the best practices guide and made its valueType a ref; I'd like to constrain the possible values to the small set of enums I've set up as {:db/ident :some.enum/value}, but attribute predicates don't seem like a good fit because they get the EID instead of the keyword, which requires a DB query to link the two.#2021-04-0815:32favilaIf closed-set validity is more important to you than extensibility, consider just using a keyword and checking the value with an attribute predicate#2021-04-0815:33favilaThe benefit of enums as refs is that you can rename them with ident semantics and you can attach additional assertions to them. If you don’t need that, that’s ok#2021-04-0815:34futuroI'd never considered attaching additional assertions to them, that's really interesting. What kind of assertions would you put on the enum itself?#2021-04-0815:35favilaAdditionally, this “best practice” was formulated before attr predicates or entity predicates existed. In those days, all constraint checking was best-effort and application-enforced. Making the enums a ref gave at least some possibility of putting a queryable metaschema into your db#2021-04-0815:36favilare “what assertions”: well some you get for free, like the transaction that introduced the enum#2021-04-0815:38favilayou could also place enums into a hierarchy (like e.g. multimethod hierarchies) or reference them from an “enum set” (i.e. the attribute, or some other enum, references the attributes as its legal value range in a metaschema) or with a :db/doc for a human to read, etc#2021-04-0815:39favilaall of this also lets your enum set be more dynamic in the application. 
Using an attribute predicate would require a redeployment to add or change the schema#2021-04-0815:40favilanote that neither method protects you from some schematic change that makes some existing enum values invalid#2021-04-0815:41favilaan entity predicate will catch it and fail the transaction if you use it to verify things opportunistically, but that’s the only tool available to you. For everything else you need to manually remove or change old values, make sure everyone stops writing them, then update your attribute predicate or homespun meta-schema#2021-04-0815:42futuroWhen you say "metaschema", do you mean that I might have an attribute called :valid-enum-vals that contains the EIDs of the enums that are currently valid?#2021-04-0815:43favilayes, I mean you have some attributes that describe entity-level (or higher) schema relationships#2021-04-0815:44favilaE.g.: https://www.youtube.com/watch?v=sQCoTu5v1Mo#2021-04-0815:45futuroThank you for this link, I'm gonna watch that right now#2021-04-0815:46favilaNote that it’s pretty old (2013). 
It’s still relevant from a data-modeling perspective, but you would probably leverage entity predicates, required attributes, and attribute predicates more now#2021-04-0815:46favilaalso, you may not necessarily want to be so rigid about entity-level schemas#2021-04-0816:16futuroThat was a really interesting talk, thank you 👏:skin-tone-2:#2021-04-0816:17futuroIt's given me a lot to think about, and it's also gotten me curious about how the RDF world handles this problem (and whether it matches with Antonio's proposed solution)#2021-04-0906:20tatutin our app we have :enum/attribute in all enum values that point to the attribute the enum value is valid for… when we create web forms we automatically query the options from the database based on that#2021-04-0915:10futuroThat's a really interesting idea as well.#2021-04-0815:21futuroHow have folks approached constraining enums-as-refs?#2021-04-0815:23futuroI've contemplated an entity-predicate, but that requires programmer involvement instead of the db automatically checking it for us (as far as I understand from the docs), and I was hoping for something that happened without programmer effort (and thus couldn't accidentally be forgotten).#2021-04-0818:40cjsauerIs there a smooth upgrade path from dev-local to datomic cloud? I see import-cloud, but that seems to only work in the cloud->local direction.#2021-04-0818:40Joe Lane"upgrade path"?#2021-04-0818:41cjsauerWell, I should rephrase. Is it easy to import data in the other direction? I imagine it’s as “simple” as streaming all the datoms in dev-local into the cloud db, but just double checking.#2021-04-0818:42cjsauerI have a tiny application that I’d like to use dev-local for, and one day it might need something more robust. Just weighing my options on how easy that move would be.#2021-04-0818:42Joe LaneIs your tiny application running on the internet, or on your laptop?#2021-04-0818:43cjsauerJust laptop at the moment.#2021-04-0818:45Joe LaneOk, this is good. 
Once your application is bigger than your laptop, use cloud.
Unfortunately it isn't as "simple" as "upload the datoms", and there are a variety of ways depending on whether you need history or not, whether the db already exists in cloud, etc.
> https://docs.datomic.com/cloud/schema/schema-reference.html#homogeneous-tuples have a `:db/tupleType` attribute, whose value is a vector of 2-8 keywords naming a scalar value type.
However :db/tupleType is cardinality one:
[#:db{:id 66
:ident :db/tupleType
:valueType #:db{:id 21, :ident :db.type/keyword}
:cardinality #:db{:id 35, :ident :db.cardinality/one}}]#2021-04-1117:36hiredmanAh, I was just clicking on the wrong link in your previous message#2021-04-1122:28Oliver GeorgeTiny typo in the Datomic Cloud https://docs.datomic.com/cloud/schema/schema-reference.html.
> :db/type/boolean#2021-04-1215:10ayushkaI'm new to datomic and datalog, but it seems this is what the top answer on stack suggests for counting and grouping aggregates. I'm not trying to criticize or anything but I can probably write this query in SQL in ~5 lines. What am I missing about datalog? I'm quite frustrated at this point...
(defn find-by-id
[conn id]
(let [db (d/db conn)]
(first (d/q '[:find
?eid ?id ?title ?content-type ?content-url (sum ?likes) (sum ?dislikes)
:keys
:eid :id :title :content-type :content-url :likes :dislikes
:with
?data-point
:in
$ ?id
:where
[?eid :post/id ?id]
[?eid :post/title ?title]
[?eid :post/content-type ?cref]
[?cref :db/ident ?content-type]
[?eid :post/content-url ?content-url]
(or-join [?eid ?data-point ?likes ?dislikes]
(and [?like :interaction/interactable-id ?eid]
[?like :interaction/type :interaction.type/like]
[(identity ?like) ?data-point]
[(ground 1) ?likes]
[(ground 0) ?dislikes])
(and [?dislike :interaction/interactable-id ?eid]
[?dislike :interaction/type :interaction.type/dislike]
[(identity ?dislike) ?data-point]
[(ground 1) ?dislikes]
[(ground 0) ?likes])
(and [(identity ?eid) ?data-point]
[(ground 0) ?likes]
[(ground 0) ?dislikes]))]
db id))))#2021-04-1220:18em@U01GXF0SQ48 So this is a pretty common problem to solve, and in the very beginning of using Datomic I struggled with similar things. The biggest mistake was thinking of it as a traditional database, where you had to shove the entire query into one request, and compose a huge complicated hairy mess, just because the database was "over there".
One big difference with Datomic is that in both Peer and Ions, your code literally runs in memory with the data, and you really shouldn't limit yourself to writing giant hairballs like your current solution. It's not very maintainable and kind of orthogonal to the ideas behind Datomic of composability, and generally I find if my query is longer than 10 lines there's something going horribly wrong.
Here's a couple of solutions:
1.) If you had the ability to introduce a direct counts attribute you could simplify your life a lot and cache these lookups - these could be guaranteed with transaction functions. Would be pretty simple to implement. Then your query would literally be 3 lines, a simple pull expression.#2021-04-1220:20em2.) Break the query down! Your solution may be technically correct, but there's no reason not to break the query down. The work done by the instance is almost the same, and if you break down the same logic into multiple queries, you get the additional benefit of more granular query caching. And reusability. And composability with more parts of your application. And readability. Consider a generic helper function that counts interactions:
(defn post-interaction-count
  [interaction db post-id]
  ;; count interactions of one type on a post; default to 0 when there are
  ;; none, since aggregates are not called on empty resultsets
  (or (ffirst (d/q '[:find (count ?i)
                     :in $ ?post-id ?interaction
                     :where [?eid :post/id ?post-id]
                     [?i :interaction/interactable-id ?eid]
                     [?i :interaction/type ?interaction]]
                   db post-id interaction))
      0))
(def post-likes (partial post-interaction-count :interaction.type/like))
(def post-dislikes (partial post-interaction-count :interaction.type/dislike))
#2021-04-1220:24emAnd then your complete function is super simple, and very readable:
(defn post-by-id
[db post-id]
(-> (d/q '[:find (pull ?eid [:post/id :post/title :post/content-url {:post/content-type [:db/ident]}])
:in $ ?post-id
:where [?eid :post/id ?post-id]]
db post-id)
ffirst
(merge {:post/likes (post-likes db post-id)
:post/dislikes (post-dislikes db post-id)})))#2021-04-1220:27emIf :post/id is registered as an identity attribute you could simplify the pull expression even further with the pull API, shaving off another 2-3 lines.
Obviously I didn't have access to your setup/database etc., so the code above may not run as-is, and notably I changed the semantics a bit, like passing around a db instead of a conn. (reason: a lot of times in the context of one web request, you actually want to keep the database value the same, and only request it once on the connection per request. Every post lookup on potentially different databases kinda defeats the purpose of Datomic's ability to give you the db as-of, which solves lots of application bugs and other unwanted issues/race conditions).
Hope this helps!
#2021-04-1313:31ayushka@UNRDXKBNY wow that was a blog post level reply. Thanks so much for the pointers.
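[Editor's note] A sketch of the further simplification em mentions, assuming :post/id is declared :db.unique/identity so it can be used in a lookup ref; then no query is needed at all:

```clojure
(d/pull db
        [:post/id :post/title :post/content-url
         {:post/content-type [:db/ident]}]
        [:post/id post-id])
```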
#2021-04-1216:47souenzzoHow to fix Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.50 in central () ?
- My ~/.clojure/deps.edn contains "datomic-cloud" {:url ""}
- I can do aws s3 ls
- it's a well-configured project (other developers are using it)#2021-04-1217:19Joe Lane@souenzzo Use the --profile and --region flags to make sure the java process which gets the jar has those credentials.
#2021-04-1217:23Joe LaneI need more information about what program you're running that throws Error building classpath. .... I don't have nearly enough details to help.#2021-04-1219:21futuroDo attribute predicates need to be explicitly allowed in the datomic/ion-config.edn file? Last week I didn't specify it and everything worked alright in dev-local, and this week I'm getting an anomaly returned saying I have to add the function to the :allow list in the ion config, though the docs for attribute predicates don't list this as a requirement.#2021-04-1219:22futuroI'm fine either way, but the change in behavior from last week to this, plus the lack of a mention in the docs, leads me to wonder if something else is going on.#2021-04-1219:22Joe Laneattribute predicates are ions#2021-04-1219:22Joe Laneions used in transactions must be in the :allow list#2021-04-1219:23futuroAha
#2021-04-1219:27futuroThat makes sense, thanks Joe!#2021-04-1310:51lambdamHello,
I submitted a new ident to production that I declared as a float.
I never used it (no datom submitted for this field yet) but now want it to be a double instead.
I tried to excise and alter but it didn't work.
Am I forced to find a new name (~ new ident) for this field?
Thanks#2021-04-1311:05lambdam---
Ok, I found a way (every step is a chronological migration):
1. I submitted the wrong type for an ident
2. I rename this ident (`:domain/field` > :domain/old-float-field)
3. I declare a new ident that is the same as the old one (`:domain/field`) but with a different type (double in my case).
I can see that those idents in the end have different ids.
Since I never submitted any datom that uses the old ident, is there any downside on the technical side and on the domain modelling side?#2021-04-1311:51jaret@dam if you didn't use the "old attribute" then you don't have to migrate values to the new ident. However, you should be aware that d/history will still point to the previous entry and :db/ident is not t-aware. But since you didn't transact against it there won't be much of a downside at all.#2021-04-1312:02lambdamThanks !#2021-04-1314:03stuarthallowayWould love any feedback folks have on https://docs.datomic.com/cloud/tech-notes/writing-a-problem-report.html#2021-04-1316:01futuroThis is a good write-up; specifically, it outlines what makes a good report, roughly describes how one goes from experiencing a problem to creating a good report, talks briefly about the tradeoffs surrounding making a good report and how you might iterate from a not-so-good report to a good one (helping reduce the inertia barrier to submitting good reports), and reiterates in multiple places the concrete things that make a report good or not-so-good which helps reinforce the point.#2021-04-1314:04stuarthallowayI also unpinned a bunch of stuff, not sure about the etiquette but when everything is important, nothing is.
#2021-04-1319:33kennyOur Datomic Cloud workloads are very bursty. I am evaluating a switch to ddb on-demand mode instead of provisioned. Is there anything to know about Datomic Cloud & ddb on-demand mode?#2021-04-1319:39Joe LaneWhat is your hypothesis kenny?#2021-04-1319:53kennyIt will just work.#2021-04-1319:54kennyIt'd surprise me if there was anything baked in about the particular ddb capacity mode. I'd prefer not to be surprised 🙂#2021-04-1319:56Joe LaneNo, I mean, why do you think on-demand will make X better?
What is X?#2021-04-1320:03kennyDDB autoscaling is too slow. By the time DDB scales up to meet our load, the event is over. We could increase the ddb provisioned read or write to meet the max utilization but then we need to pay for peak periods 100% of the time.#2021-04-1320:05Joe LaneYou know what I'm going to ask, don't you?#2021-04-1320:06kennyDDB read usage example. Our reads spike very high for a short period of time. DDB auto scales read up. By the time it scaled up, we no longer needed the capacity. We're now paying for a whole bunch of extra capacity until ~17:27 when it scales back down to where it started. Also, scaling provisioned capacity down is limited to 4 times per day.#2021-04-1320:07kennySo if that event happened more than 4 times (it does), we're stuck with the ddb scaled up bill for the remainder of the day.#2021-04-1320:07kennyPerhaps how I know this is happening? 🙂#2021-04-1320:09Joe LaneWhat happens at 16:00 that causes reads to spike?#2021-04-1320:09kennyBatch job.#2021-04-1320:17kennyExample showing ddb failing to scale down due to hitting max scale down events per day.#2021-04-1320:21Joe LaneYou should measure the cost differences between on-demand and provisioned.#2021-04-1320:24Joe LaneI was worried you were assuming that increasing the ddb capacity was going to improve your throughput, but it sounds like you're just trying to optimize cost. FWIW, on-demand is more expensive than provisioned throughput, so you should measure very carefully to make sure you don't end up losing money instead of saving it.#2021-04-1320:24kennyYep. Sounds like you haven't heard of anyone having issues with switching provisioning modes?#2021-04-1320:33Joe LaneIt always depends 🙂#2021-04-1320:35Joe LaneWhatever you do, just pay attention to the bill and the perf and make sure it was worth it.
#2021-04-1320:36kennyWhat does it depend on?#2021-04-1320:37Joe LaneWhat problem that customer actually had.#2021-04-1320:47kenny> increasing the ddb capacity was going to improve your throughput
Does this not hold when a compute group is already saturated with ops?#2021-04-1320:49Joe LaneRecur, see above 🙂#2021-04-1320:52kennyRight, it depends. What is an example situation in which increasing ddb capacity does not improve throughput?#2021-04-1320:53Joe LaneYou're issuing transactions at a faster rate than datomic can index, then you'll get an anomaly back, no matter how much ddb you provision. That sustainable throughput rate is specific to your system though, and can vary between different systems / customers.#2021-04-1320:57kennyMakes sense. Is this the anomaly to which you are referring?
{:cognitect.anomalies/category :cognitect.anomalies/busy, :cognitect.anomalies/message "Busy indexing", :dbs [{:database-id "f3253b1f-f5d1-4abd-8c8e-91f50033f6d9", :t 83491833, :next-t 83491834, :history false}]}#2021-04-1320:57Joe Laneyep#2021-04-1320:58kennySo until indexing is finished, all writes will return that anomaly?#2021-04-1320:58Joe Lanefor that database, yes
#2021-04-1321:01kennyNo effects for other databases? (totally different topic at this point 🙂 I just encountered this exact anomaly a day ago so it's of particular interest)#2021-04-1321:02kennyAlso, I'm assuming "that database" means all databases listed under the :dbs key in the anomaly?#2021-04-1321:10Joe LaneWell, your primary group nodes are likely under pretty high load (and CPU utilization) at that point, so, yes there are effects on other databases, because it's allocating resources away to do this big indexing job and process transactions.#2021-04-1321:19kennyHmm, I guess I'm confused by your "for that database, yes." It sounds like one of these things is true when that anomaly is returned:
1. Writes to any database will fail if a Busy indexing anomaly is returned.
2. All writes to the database currently being indexed (the databases listed under the :dbs keys) will fail and writes to other databases may or may not succeed.
3. Writes to any database may or may not succeed.#2021-04-1322:26ghadiIf you understand when your load is going to occur, you could write a lambda that imposes a “perfect” autoscaling policy#2021-04-1322:26ghadiIn other words you could take the policy into your own hands, rather than rely on ddb’s scaler, which is reactive#2021-04-1322:47kennyWhile that is true, you’re still limited by DDB’s maximum of 4 downsizing events per day. #2021-04-1322:54ghadi27, not 4#2021-04-1322:54ghadiYou accumulate an extra event each hour that elapses
#2021-04-1402:04jaretI did not know that! Cool.#2021-04-1402:28ghadihttps://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#decreasing-increasing-throughput#2021-04-1415:28kennyOh, cool. So 4 the first hour and 1/hour until the day ends. It's an interesting possibility. Not exactly trivial since each query group points to the same ddb table. You'd need to understand when the load would occur for each compute group in the system.#2021-04-1415:30ghadiddb default scaling policy is reactive -- if you know when the load is going to arrive ahead of time, you could make a cron-based policy#2021-04-1415:30ghadihttps://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html#2021-04-1415:31ghadior you can take fate into your own hands and have a lambda fire periodically that controls scaling#2021-04-1415:31ghadibut -- should only have one controller in charge. Policies don't compose#2021-04-1623:24adamtaitCould you use the cognitect-labs/aws-api and fire the DDB scaler from the process that starts the batch job? Maybe you could even wait on the scaling completed.
I do something similar to start up an expensive EC2 instance running a web driver just before my crawler starts. The aws-api call to start the EC2 instance blocks waiting for the instance to finish starting. 
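[Editor's note] A sketch of adamtait's idea using cognitect-labs/aws-api: raise the table's provisioned throughput just before the batch job starts (the table name and capacity numbers are hypothetical):

```clojure
(require '[cognitect.aws.client.api :as aws])

(def ddb (aws/client {:api :dynamodb}))

;; bump read capacity ahead of the 16:00 batch job; lower it again afterwards
(aws/invoke ddb
            {:op :UpdateTable
             :request {:TableName "datomic-my-system"  ; hypothetical name
                       :ProvisionedThroughput {:ReadCapacityUnits  1000
                                               :WriteCapacityUnits 500}}})
```

aws/invoke returns once DynamoDB accepts the update; to wait for the scaling to complete, one could poll :DescribeTable until the table status is "ACTIVE". Remember the downsizing-event limits discussed above still apply.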
#2021-04-2521:26kennyFor those interested, we switched to on-demand mode 4 days ago and all the DDB provisioned throughput problems have gone away 🙂 As an added bonus, our ddb costs dropped ~30%.
#2021-04-2521:44Joe LaneGreat to hear Kenny !#2021-04-1322:19bhurloware there any issues with using both memcached and valcache in the same peer (on-prem) deployment at the same time?#2021-04-1322:33favilaSame process? AFAIK you can’t do this at all#2021-04-1402:06jaretYep you cannot do that on the same process. You can however make the choice independently per process. https://docs.datomic.com/on-prem/operation/valcache.html#vs-memcached#2021-04-1513:26bhurlowGreat thanks, that makes a lot of sense#2021-04-1412:29tatuthas there been any news on cloud disaster recovery, the discussions don’t seem active https://forum.datomic.com/t/cloud-backups-recovery/370/10 we currently have a hand rolled tx log backup/restore solution but it’s a bit painful to maintain#2021-04-1415:24kennyMaybe you're aware already but there's a feature request on ask.datomic: https://ask.datomic.com/index.php/431/cloud-backup-and-recovery#2021-04-1505:00tatutyes, I’ve upvoted that but it doesn’t seem to have more info#2021-04-1422:54kennyThe recommendation for enums in Datomic is to https://docs.datomic.com/cloud/schema/schema-modeling.html#enums. Using :db/ident as an enum value complicates the use with d/pull. Everywhere you pull an entity, you'll need to write code to deal with the nested enum entity. For example, say I have a :user/type enum that is defined as suggested by that doc (card one, ref attr). Anytime I pull my :user/type attribute, I need to write code to unnest the :user/type value.
(d/pull (d/db conn) '[:user/type] [:user/id "a"])
=> #:user{:type #:db{:id 87960930222153, :ident :user.type/a}}
How are folks dealing with this? Just always wrapping the d/pull return with something like (update :user/type :db/ident)? Perhaps always remembering to specify the pull pattern for all enum attributes as (:user/type :xform my-ns/get-db-ident), where my-ns/get-db-ident is just (def get-db-ident :db/ident)?#2021-04-1423:03Tyler Nisonoffone way i was doing this was to take my nested entity thats returned from the pull and then run it through a post-walk like so:
(clojure.walk/postwalk
  #(match %
     {:db/ident ident} ident
     :else %)
  nested-entity)
if you know that all maps with db/ident are referencing enums you want to unwrap
#2021-04-1423:14kennyOh, nice! That's a pretty heavy penalty to pay for using enums as idents though.#2021-04-1505:57thumbnailAt work we tried both suggestions, and switched from the postwalk approach to the xforms. Mostly because of performance.
We tend to def our pull expressions for re-use, so the xforms are in 1 place. Needing to remember it's a db/ident isn't much of a problem for us, it's required to pull db/ident anyway.
But one could always write a linter to make sure that xforms is used for idents.#2021-04-1508:18tatutis walk so slow? I’ve used it heavily and never had it be a problem in my workloads#2021-04-1509:56thumbnailWe had a specific usecase where we had very big resultsets (~150,000 entities + nesting). That's when we replaced the postwalk. Generally it's fine for sure.
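As a sketch of the xform approach described above (the attribute and namespace names here are hypothetical, and the xform must be a fully-qualified symbol that the peer can resolve):

```clojure
(require '[datomic.api :as d])

;; A var the pull pattern can reference by fully-qualified symbol.
(def get-db-ident :db/ident)

;; The nested {:db/id ... :db/ident :user.type/a} map comes back
;; already unwrapped to the bare keyword.
(d/pull db
        '[:user/id (:user/type :xform my-ns/get-db-ident)]
        [:user/id "a"])
```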
#2021-04-1512:36tatutthat is a big result, out of curiosity, how much was the difference between walk and xform with that result?#2021-04-1516:10thumbnailUnfortuinely I don't have exact numbers. We also optimised the query at that time.#2021-04-1516:40kennyWe also have very big query results. #2021-04-1521:43thumbnailTo give a very rough idea about our postwalk approach vs a direct update fn;
(time
  (do (map (fn [x] (update x :field :db/ident))
           corpus)
      nil))
"Elapsed time: 0.093924 msecs"
=> nil
(time
  (do (walk/postwalk (fn [x]
                       (if (and (map? x)
                                (:db/ident x))
                         (:db/ident x)
                         x))
                     corpus)
      nil))
"Elapsed time: 552.667018 msecs"
=> nil
(corpus is a list of 150,000 maps btw)
#2021-04-1605:58tatutinteresting, thanks#2021-04-1605:58tatutbut isn’t the first one not a good benchmark, as map returns lazy seq
#2021-04-1612:41thumbnailYou're absolutely right. I knew something was up. #2021-04-1713:41thumbnail(time
  (do (mapv (fn [x] (update x :field :db/ident))
            corpus)
      nil))
"Elapsed time: 142.214586 msecs"
=> nil
For completeness' sake:#2021-04-2521:33kennyThought I'd persist this question on ask.datomic so others can reference it and add their opinion: https://ask.datomic.com/index.php/606/recommended-way-to-handle-db-ident-enums-when-used-with-pull
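The laziness pitfall tatut caught above generalizes to any REPL benchmarking: force lazy results before timing them. A minimal sketch, reusing the hypothetical corpus and :field from the snippets above:

```clojure
;; map is lazy, so wrap it in doall (or use mapv as above); otherwise
;; `time` measures only the construction of the lazy seq, not the work.
(time (doall (map (fn [x] (update x :field :db/ident)) corpus)))
```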
#2021-04-1516:10uwoI was going to post this question here, and then I thought the forum might be a better place. Happy to have any input! https://forum.datomic.com/t/query-populous-business-entity-where-there-are-changes-at-any-level/1830#2021-04-1905:02tatutthat looks quite tricky, can’t really say how to make the query faster, but if you could update some attr on the root every time you update some descendant (like :root/recursive-modification-time timestamp), you could simply query all roots based on that#2021-04-1905:04tatutin effect moving the “has some descendant changed” determination from query time to tx time#2021-04-2619:45uwoHey, I never thanked you for this response, so thank you! When we discussed it on the team that was definitely a solution that came up when we asked "how would we have done this in SQL-land". I personally suspect that something may be wrong with my recursive rules, but I'm still working through some of the details. Jaret posted a great set of questions for us, which I'm working on answering#2021-04-1601:11cjsauerIs there a secret namespace that one can access for parsing pull expressions into an AST?
#2021-04-1601:23favilahttps://github.com/edn-query-language/eql#ast-encodedecode#2021-04-1601:24favilaIt’s not perfect, it doesn’t understand some attribute option syntax#2021-04-1601:33cjsauerThat is really close. I see it treats parameters differently, but only slightly.#2021-04-1601:44kennyWe write our queries in eql and have a transform function to convert ast to Datomic syntax. #2021-04-1602:02cjsauerNice, that’s a good approach.#2021-04-1616:56souenzzohttps://github.com/souenzzo/eql-datomic/
#2021-04-1623:38cjsauerVery nice! Thanks!#2021-04-1706:18pinkfrogA user will generate events which serve two purposes:1. update the db. 2. tell downstream event handler.#2021-04-1706:19pinkfrogI wonder if this thing is possible with Datomic alone instead of Datomic + Kafka.#2021-04-1713:15potetmYou can process the transaction log and use it to trigger downstream events.#2021-04-1713:17pinkfrogYup. Thought about that too. Is it a production-used approach? Are you aware of any working example?#2021-04-1714:19potetmYeah, they use this to do decanting.#2021-04-1713:15potetm@i#2021-04-1802:11pinkfrogThe web page on datomic and rebl: https://docs.datomic.com/cloud/other-tools/REBL.html#rebl-datomic points to a local file url.#2021-04-1802:38jarrodctaylorThanks for the report.#2021-04-1909:38jamesmintramHey all. I asked a little while back about doing multi-tenant apps with datomic and a couple of ideas came up. One of those was using a single DB per customer. So the connection string might look like : "datomic:"
This looks interesting - does this mean that all databases backed by the same Postgres instance (for example) share the same DB?
In which case, the only difference between:
"datomic:"
and
"datomic:"
Is some sort of partitioning key that Datomic uses to keep data separate within its backing store?#2021-04-1913:39jaretHi, For on-prem we generally recommend that you run a single logical database per transactor (pair). Some customers use additional DBs for operational tasks, but generally a datomic on-prem system (and license) includes a transactor pair, single DB, associated peer applications. Perhaps we can discuss your needs for separate DBs. Have you evaluated https://docs.datomic.com/cloud/whatis/architecture.html? Cloud is more suitable for per db multi-tenancy. How many DBs do you envision long term? What are the sizes of each DB that you expect? If you'd like to share more details you can shoot a ticket to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> and/or we can arrange a call to discuss.#2021-04-1909:42jamesmintramIf that is the case, then is it feasible to "open" a new DB connection per http request? or keep a pool of
From the docs
Datomic connections do not adhere to an acquire/use/release
pattern. They are thread-safe and long lived. Connections are
cached such that calling datomic.api/connect multiple times with
the same database value will return the same connection object.
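Given that caching behavior, a handler can simply call d/connect on every request; no pooling is needed. A hedged sketch (the URI and do-something are hypothetical):

```clojure
(require '[datomic.api :as d])

;; d/connect is cheap after the first call for a given URI: the
;; connection is cached, thread-safe, and long-lived.
(defn handler [request]
  (let [conn (d/connect "datomic:sql://customer-db?jdbc:postgresql://...")]
    (do-something (d/db conn) request)))
```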
#2021-04-1910:25prncHi 👋
I am looking into application level access restriction/permissions to entities.
I couldn’t find any specific guidelines for cloud that would allow for a separation of concerns between query logic and permissions.
In on-prem, filter seems to be useful for that. What are some good ways to go about this for datomic cloud?
[It depends ofc, it’s early stage so I don’t have specific requirements, just looking for some general guidelines/mechanisms in datomic that could be leveraged].
Thanks!#2021-04-1912:17tatutafaict, there is no similar functionality in cloud#2021-04-1912:25prncEven if there is no filter fn as such I would be interested how you should go about restricting user access (say only to “their own” data) in e.g. queries? I’m just new and not sure what the right approach is 😉 Is it just “do it in every query explicitly” kind of thing?#2021-04-1912:51thumbnailIn our system we only check the authorization (i.e. access) at the borders of the system. When you're past that interceptor/layer we don't bother. That way the queries are simpler and faster (at the cost of always running one (or more) queries beforehand).#2021-04-1913:14Joe LaneQueries are data and compose, programmatically building queries is a normal thing to do :)#2021-04-1913:57shieldsGood example at the 12 minute mark of how Nubank controls ownership and authorization w/ their schema using rules.
https://youtu.be/7lm3K8zVOdY#2021-04-1914:03prncI’ve started drafting an impl of what I need with rules—will definitely check out the nubank talk as it seems to go that direction. Thanks everyone!#2021-04-2004:49tatutwe too have authorization rules at the boundary, all commands and queries from clients are authorized… I think it’s common that authorization is based on a few core entities that are part of the query args / command payload
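A minimal sketch of the rules-based ownership check being discussed, assuming hypothetical :resource/owner, :resource/title, and :user/id attributes:

```clojure
(def rules
  '[;; a resource is visible to ?user iff ?user owns it
    [(visible-to ?resource ?user)
     [?resource :resource/owner ?user]]])

;; All resources visible to Alice; rules are passed as the % input.
(d/q '[:find ?r
       :in $ % ?user
       :where
       [?r :resource/title _]
       (visible-to ?r ?user)]
     db rules [:user/id "alice"])
```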
#2021-04-2006:45tatutthat nubank presentation with filter was interesting, it would be interesting to see actual performance numbers… it would seem to me that the filter approach can’t use indexes efficiently, but don’t know any details#2021-04-2015:10prncThanks @U11SJ6Q0K!
Being new to datomic this is quite an interesting problem for me.
The filter solution (on-prem only!) seems to be very clean, and it’s what nubank are using. Query logic is separated from access restrictions.
Leveraging “database as a value” and thus playing to datomic strengths.
The other nice thing about datomic is ofc “queries are data” as mentioned by @U0CJ19XAM
I’ve just used this approach to write a boundary query-as-a-user fn, which rewrites the queries to refer to the access restriction rule.
Somehow it doesn’t really feel right, less separate, less explicit. Maybe it’s just my shoddy impl 😜 We will see how it fares in practice! If anyone has examples of those kinds of things in real world (w/ users, permissions, auth etc) open source code I would love to see them! Cheers!#2021-04-2015:16Joe LaneHow complex is your permission model?#2021-04-2015:34prncATM I’m only concerned with the ~trivial case of: “Alice owns some resources and only Alice can see them” where resources are entity level, so not very complicated and not very granular, but that will probably evolve. It’s a knowledge base type of application so where this will potentially get more interesting is in the cross-sections of public and private information--private knowledge graphs embedded in public ones. So I’m just trying to model this in a nice way with datomic without prior experience with datomic ;)
Presently I was mainly concerned with the simple mechanics i.e. where those restrictions should sit (in terms of best practices), so it’s not error prone.
So today I’ve just added a rule and a centralised query fn for restricted resources that just adds those access constraints captured by the rule to the :where clause... so far so good I think ;)#2021-04-1918:03cjsauerHas anyone experimented with defining “virtual attributes” in the db schema? Meaning, attributes that are not meant to be involved in the transaction of data, but that only serve as documentation, or to make the system even more self-describing. For example, I might include :product/href in my schema as a card-one string type attribute, but only ever derive its value on-demand. I’m finding that describing the system fully in the schema gives you really great “meta-query-ability”, but I’m wondering if there is any downside to doing this.#2021-04-1918:05cjsauerI’ve seen quite a few other projects that do this out-of-band in code. But it struck me that this type of metadata really belongs in the schema proper. It could even allow datomic to help you with making backwards compatible changes by warning you if you break schema (?)#2021-04-1918:07Joe LaneWhat you're describing is covered here: https://docs.datomic.com/on-prem/best-practices.html#annotate-schema @cjsauer!#2021-04-1918:08Joe LaneThere are limits to how many schema elements you can have, but that is covered here: https://docs.datomic.com/on-prem/schema/schema-limits.html#2021-04-1918:09Joe LaneI've seen migration tools built using this capability, diagramming tools, etc.#2021-04-1918:11cjsauerAha, cool! For a moment it felt odd to be capturing schema about data that I don’t intend to ever store on disk, but after pondering a bit, why should the durable storage have a monopoly on the schema definition??#2021-04-1918:12Joe Lane...
I mean, it is durable. These attributes are the same data as would be in transactions and are stored in durable storage.#2021-04-1918:12Joe LaneDatomic is built out of itself#2021-04-1918:13cjsauerTrue true. I mean to say that I never intend to actually transact a value of that attribute onto domain data. Of course the schema itself is stored durably.#2021-04-1918:16cjsauerAnother way to put this: essential state shouldn’t have a monopoly on the schema. So it’s not really about its durability. More its use in the system.#2021-04-1918:16Joe LaneMakes a nice compile target too 😉#2021-04-1918:17cjsauerYea definitely#2021-04-1921:19ghaditxacting data about attributes is such a useful thing to do ^^^^^^
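The annotate-schema idea above might look like this as a sketch (the :attr/derived? meta-attribute is made up for illustration):

```clojure
;; 1. Install a meta-attribute for marking derived, "virtual" attributes.
[{:db/ident       :attr/derived?
  :db/valueType   :db.type/boolean
  :db/cardinality :db.cardinality/one}]

;; 2. Define :product/href and annotate it as derived-on-demand.
[{:db/ident       :product/href
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :attr/derived?  true
  :db/doc         "Never transacted on domain entities; derived on demand."}]
```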
#2021-04-2005:05steveb8nQ: about creating a global datomic cloud service i.e. multi-regions. does anyone have experience with this? https://forum.datomic.com/t/replication-of-some-all-datoms-to-other-dbs/1779#2021-04-2116:16jaretHey @U0510KXTU sorry I missed the post. I just updated on the forum thread as slack gets archived. I am just hoping to understand the problem space a little bit better before I jump into cross region advice/approaches.#2021-04-2123:11steveb8n@U1QJACBUM thanks. I’ve replied there too. better for it to be in a permanent record for others.#2021-04-2008:09Lennart BuitUnderstanding question: If you pull an existing attribute (that isn’t :db/id) of a non-existing entity, you get nil, but if you pull :db/id of that same non-existing entity, you get {:db/id nil} . Why is that?#2021-04-2204:52tatutQuerying “entity with greatest some-attr and pulling from that”, I’ve been using not-join to query that there isn’t another entity with a greater value and was wondering if there is a more convenient pattern I’m missing?#2021-04-2207:23Lennart Buit{:find [(max ?some-attr) (pull ?e [*])] …} perhaps, or am I now missing something#2021-04-2207:34tatutthat returns multiple result rows… each the max and each ?e#2021-04-2208:02tatutI only care about the ?e with the max value and want to pull from that, so result should have single row#2021-04-2208:03tatutI just figured, that instead of not-join I can bind the (max ?some-attr) to value and find the ?e based on that (as the values in my case are unique) EDIT: scratch that, gives different result#2021-04-2208:30tatutsplit it into two queries: first find the single max value for attr, then query the entity based on that max value… that was faster than not-join#2021-04-2213:08Joe LaneHey @U11SJ6Q0K , can you create a minimal repro of the data , schema, and the desired result using dev-local ?
I’m interested in documenting optimal solutions to this but I think it will depend on quantity, use of tuples, insert/update patterns, etc :)#2021-04-2304:02tatutnot sure if I have enough cases for a general case. Looks to me that it’s just about how complex the query to get the “max of some attr” is and how then can you get the :db/id of the entity that had the max value#2021-04-2304:10tatuta max-with-id aggregate that returns the max value and the :db/id of the entity that had it, feels like would be useful#2021-04-2313:47Joe LaneIs this all that you’re doing with the query?
could you use index-pull or index-range and just grab the first element?
Could you write a query function to do that for you from within the query ( if you need to pass that value to other where clauses, for example)
Use Reverse flag for max (I think, on mobile). #2021-04-2218:32stopaHey team, noob question, I wanted to make sure I understood.
For the starter plan, What does 1 year of updates / maintenance mean? i.e what will happen after 1 year? Would I simply get another starter license, and upgrade, or something else?
Looking at this description, it seems like the difference between starter and pro is support + updates. I wanted to make sure I fully understood that this is the only difference (i.e I can have multiple boxes of datomic running, etc)#2021-04-2312:16jaret@stopachka all Datomic licenses are perpetual, meaning they will always work on versions of Datomic on-prem released prior to their expiration. Starter is intended for customers wanting to try out Datomic, after the 1 year the expected path is to move on to a PRO license to continue getting upgrades + support + HA etc. If you aren't ready after one year, you can continue using your Starter license, but you will not be able to "renew" and get new versions released after your starter licenses expiration.
> can have multiple boxes of datomic running, etc
I want to clarify that a Datomic license supports a single Datomic system which is the Transactor (pair for HA of active/standby), and all associated peers. Datomic is a distributed system so all of these processes can be (and should be) on different machines.
#2021-04-2315:38stopaAwesome, thank you @U1QJACBUM !#2021-04-2315:38afryHey all, could anybody point me to a good repo on Github with an easily-grokked implementation of Datomic on-prem? I'm finding that getting over the hump of integrating the library and getting a hello-world thing going is pretty challenging.#2021-04-2411:42ccortesI had the same problem when I was trying to build a website with clojure and datomic. This https://github.com/milgra/tutorials/blob/master/full-stack-web-development-with-clojure-and-datomic.md helped me a lot.#2021-04-2412:56afryAh, this is great, thank you! This reminds me a little bit of the "Immutable Stack" series: https://youtu.be/QrSnTIHotZE#2021-04-2315:56Joe LaneHi @andyfry01, have you looked at https://docs.datomic.com/on-prem/learning/day-of-datomic.html ?#2021-04-2315:57Joe LaneAnd the accompanying github repo https://github.com/Datomic/day-of-datomic#2021-04-2316:51afryI'll take another look at it, thanks @lanejo01!
I did enjoy the lecture series, maybe I just have to be patient and pick the repo apart a bit.#2021-04-2316:52Joe LaneIf you have specific questions / problems you're hitting I'm happy to help answer those too 🙂#2021-04-2317:14afryIn that case, maybe you could explain this to me: what's the difference between datomic.api and datomic.client.api?
In hello_world.clj in the Day of Datomic repo for example, they DON'T use datomic.client.api: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/hello_world.clj#L10
But in the official on-prem docs, they do: https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html#repl
I was wondering if this might be a cloud vs. on-prem thing.#2021-04-2317:18Joe LaneSure thing!
So in datomic cloud and dev-local, the only way to interact with datomic is via some form of the "Client API". On-prem is unique because it supports a version of the "Client API" and, additionally, has what's known as the "Peer Api", a separate way to interact with datomic.#2021-04-2317:18Joe Lane• datomic.api -> Peer API
• datomic.client.api -> Client API#2021-04-2317:19Joe Lanehttps://docs.datomic.com/on-prem/overview/clients-and-peers.html#2021-04-2317:23afryGotcha, that's helpful!
> If you are trying Datomic for the first time, we recommend that you begin with a client library.
Sounds like I should focus my efforts on getting up and running with the client library :thumbsup:#2021-04-2317:26Joe LaneAre you only looking at on-prem? We have a cloud offering that is simpler operationally that may fit your needs better (depending on your needs, of course 🙂 )#2021-04-2317:35afryI think I'd like to stick with on-prem for now to cut down on the complication (and expense) of using a cloud DB, but if and when I scale up to that level I'll come knocking 😛#2021-04-2317:37Joe LaneFair enough, but just so you're aware, on-prem licenses are paid upfront (once you're out of starter) while cloud is pay-as-you-go and can be cheaper in some circumstances. I'd hate for you to build out a bunch of stuff and then realize you wanted cloud instead.#2021-04-2317:38Joe LaneHave fun! I'm around if you have anymore questions!#2021-04-2317:41afryIndeed! I appreciate the help. I'm still in early days with it all, but Datalog is blowing my mind. I feel like for the first time in my life I can actually write a database query.
My MO since I first started doing software has been more or less SELECT * FROM SOME_TABLE and proceed to aggregate/filter/whatever with backend JS or Clojure.
This was a huge help for me: http://www.learndatalogtoday.org/ . It's a fantastic introduction to Datalog, and also a really good example of a real-world, relatable, and small-scale Datomic DB + schema.#2021-04-2319:53jarrodctaylorPossibly worth considering as another option is https://docs.datomic.com/cloud/dev-local.html no expense and same client api.#2021-04-2421:06LuanHello everyone, I have two entities, user and address, I was trying to use tuples to restrict a user to have only addresses with different types, but when I transact one user with address it doesn't put the address type too ":user/id+addrtype [#uuid "c8335aeb-686f-438b-bbb7-06691ac81c69" nil]". This is my tuple:`{:db/ident :user/id+addrtype :db/tupleAttrs [:user/id :address/type]}` . Is that correct or is there any alternative I can use for this problem?#2021-04-2421:26LuanIt works when I use :address/type along with :user/id and the same data nested#2021-04-2421:27Joe LaneI can't help unless I see the schema and tx-data#2021-04-2421:34LuanSorry, I misclicked, this is the schema:
{:db/id #db/id [:db.part/db]
:db/ident :address/type
:db/valueType :db.type/string
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :address/street
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :user/id
:db/valueType :db.type/uuid
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :user/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :user/address
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db.install/_attribute :db.part/db}
{:db/ident :user/userid+type
:db/valueType :db.type/tuple
:db/tupleAttrs [:user/id :address/type]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity
:db.install/_attribute :db.part/db}
And the tx:
{:user/id #uuid "c8335aeb-686f-438b-bbb7-06691ac81c69"
:address/type "Home" ;; it works when I add it here
:user/name "Chris"
:user/address {:db/id (d/tempid :db.part/user)
:address/type "Home"
:address/street "St. 222"}}
#2021-04-2421:40Joe LaneValidate an experiment for me:
I really do want you to use "Foo" here for the address type
first transact:
{:db/id (d/tempid :db.part/user)
:address/type "Foo"
:address/street "St. 222"}
Then, in a second transaction, transact:
{:address/type "Foo"
:address/street "St. 123"}
Then:
(d/pull a-fresh-db '[*] [:address/type "Foo"])
What do you get back?#2021-04-2421:50LuanI used "Foo" instead, using only these two transactions I get:
{:db/id 17592186045418, :address/type "Foo", :address/street "St. 123", :user/userid+type [nil "Foo"]}#2021-04-2422:02Joe LaneSo it appears that in your schema :address/type is unique like :user/id , and therefore, whenever a second address with {:address/type "Home", :address/street "Route 66"} is added to the system it would overwrite the first user's address.
Your users might find this problematic 🙂#2021-04-2422:03Joe LaneThe reason the tuple "Works" when you attach it to the user is because you have what is known as a "composite tuple", which can only read attributes from the same entity, not nested ones.#2021-04-2422:06Joe LaneHow many :address/type's you expect to have in your application?
If the answer is "A small number" you might benefit from creating some new attributes instead of an address entity.
Example:
{:user/id #uuid "123..."
:user/name "Chris"
:address.home/street "St. 222"
:address.billing/street "st. 123"
:address.work/street "Route 66"}#2021-04-2422:08Joe LaneSchema would be something like:
{:db/id #db/id [:db.part/db]
:db/ident :address.home/street
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :address.work/street
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :address.billing/street
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}#2021-04-2422:09Joe LaneThen, a user can only have 1 address of each "type"#2021-04-2422:23LuanI was trying to to this composite tuple, but I didn't know it doesn't work on nested entities 🙂 And I also was trying to model it to support another entity like Company to have an address, so I would just ref to it. I will think about this solution, thank you 🙂 .#2021-04-2422:34Joe LaneFWIW @cbluan, companies can use those same attributes:
{:db/id #db/id [:db.part/db]
:db/ident :address.mailing/street
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :favorite/color
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :company/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db.install/_attribute :db.part/db}
{:db/id #db/id [:db.part/db]
:db/ident :company/employees
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db.install/_attribute :db.part/db}
Then:
{:company/name "Acme"
:favorite/color "Blue"
:address.mailing/street "Route 66"
:address.billing/street "123 Nowhere St."
:company/employees [
{:user/id #uuid "123..."
:user/name "Chris"
:favorite/color "Green"
:address.home/street "St. 222"
:address.billing/street "st. 123"
:address.work/street "Route 66"}
]}
Don't think in rectangles/columns; instead, try to think in sets of attributes
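One sketch of keeping a separate address entity while still getting the uniqueness Luan was after: make :address/type non-unique, add an owner ref on the address, and put the composite tuple on the address itself, since composite tuples can only read attributes of the same entity (attribute names here are hypothetical):

```clojure
[{:db/ident       :address/owner
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :address/owner+type
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:address/owner :address/type]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
;; each owner can then have at most one address per :address/type
```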
#2021-04-2421:12Joe LaneWhat is the tx-data and full schema @cbluan?#2021-04-2516:44stopaHey team, are there any resources, or have there been any previous attempts at turning datalog queries into realtime queries?
The problem of figuring out which queries changed based on a new fact seems daunting, but potentially doable with the right hacks.
If someone has thoughts/ resources, would love to hear!#2021-04-2516:45potetmIsn’t this straightforward in principle?#2021-04-2516:47potetmGo over the txn log. Get an as-of db for each txn. Run each query. Compare to the as-of db for the next txn.#2021-04-2516:48potetmShould even be parallelizable. (partition 2 1 dbs) and run each pair of queries in a thread.#2021-04-2516:55favilaThe technique is called differential dataflow: https://www.nikolasgoebel.com/2018/09/13/incremental-datalog.html
#2021-04-2516:56favilaThis is one impl for datomic using some Rust parts#2021-04-2516:57favilaThe differential dataflow library and a server based on it are written in Rust#2021-04-2517:02stopaOhmagad. Very exciting, thanks team will dive deeper (re: running each query—initially assumed this would be too expensive, but looks like this essay has an implementation to do this incrementally.)!#2021-04-2518:16Joe Lane@U0C5DE6RK I’d love to know what you mean by “real-time queries” and the capability you’d like to have in your system to solve some real problem. And what the problem is :) #2021-04-2518:33stopaHey Joe, I was thinking along the lines of https://tonsky.me/blog/the-web-after-tomorrow/ , and wanted to see what would be possible to hack together today. May dive deeper this week! #2021-04-2706:49bbssI tried playing with the declarative-dataflow stuff a couple times a while back, but it's become a bit outdated now, hard to build.
#2021-04-2613:13jdkealyIs there any way to get to use "next gen" instance types in the my-cf template
# AWS instance type. See for
# a list of legal instance types.
aws-instance-type=t2.medium
as far as i can tell, the t2 machines are dated, while trying to use next gen throws an error, it doesn't recognize the type#2021-04-2613:21kennyNo. I opened a feature request on this recently. If interested, you can vote for it there https://ask.datomic.com/index.php/604/support-for-recent-aws-instance-types#2021-04-2613:39ghadi@U1DBQAAMB may be asking about the on-prem template, not cloud
#2021-04-2613:39ghadiunsure#2021-04-2614:42jdkealyyes, the on-prem template#2021-04-2615:53jaretYeah, @U1DBQAAMB the on-prem template is provided as a convenience you can roll your own. Otherwise we'd have to include the new instance types in the next release with the template tooling.#2021-04-2720:38uwoWhy would this return immediately:
(bounded-count 200 (d/datoms (d/as-of db start-t) :eavt))
And this hasn't returned at all? (Well, I interrupt it after a minute.)
(bounded-count 300 (d/datoms (d/as-of db start-t) :eavt))#2021-04-2814:59uwoAfter ensuring that I had recent, matching versions of transactor and client, I was still seeing this issue. Funny enough it was on the cross-over from 223 to 224, as in it succeeds for the 223 and hangs for 224.
This only occurred with d/as-of. I have a suspicion that this may be a garbage-in/garbage-out scenario, because the start-t I provided was an #inst before my earliest transactions. When I use an #inst within the range of my db's txes it appears to work.#2021-04-2815:14uwohmm. something still a little wonky even within a valid range :thinking_face:#2021-04-2721:21ghadi@uwo on prem or cloud?#2021-04-2721:41manutter51On prem. (I’m a team-mate of uwo’s)
#2021-04-2722:56uwoMy library is ahead of my dev transactor version. I'm going to check it again tomorrow with matching versions.#2021-04-2809:38danierouxWhat is the simplest way to have cronjobs functionality on Datomic Cloud?#2021-04-2809:40tatutidk about simplest, but we've used cloudwatch cron events to trigger an ion lambda#2021-04-2810:05danierouxAh, did not know cloudwatch had that.
And, of course, would prefer something not-in-AWS.#2021-04-2810:30tatutdatomic cloud is always in aws#2021-04-2810:30tatutit is easier to control the cron job than to build some threads that are run on all nodes of the compute group, but ymmv#2021-04-2811:56danierouxI am definitely not syncing threads across all nodes, that seems like an invitation to screw up!
I’ll give in to cloudwatch and triggering ions, thank you#2021-04-2815:43ivanaHello. How can I get this to work? I got Assert failed: All clauses in 'or' must use same set of vars, had [#{?email ?e} #{?phone ?e}]
[:find ?e
 :in $ ?email ?phone ?id
 :where
 (or [?e :worker/email ?email]
     [?e :worker/phone ?phone])
 (not [?e :worker/id ?id])]#2021-04-2816:00favilanm, didn’t read carefully#2021-04-2816:00ivanaThis way seems to be working
[:find ?e
 :in $ ?email ?phone ?id
 :where
 (or-join [?e ?email ?phone]
   [?e :worker/email ?email]
   [?e :worker/phone ?phone])
 (not [?e :worker/id ?id])]#2021-04-2816:00ivanaAs much as I tested it#2021-04-2816:01favilayes that is correct.#2021-04-2816:01ivanaSo the cause was the input parameters rather than the values?#2021-04-2816:03favilaAll “branches/implementations” of a rule must have the same bindings. If you don’t specify bindings the parser infers them from whatever bindings it sees in each branch. If they don’t match, you get that warning#2021-04-2816:03ivanaIf I set values in place of input arguments it works with or also#2021-04-2816:04favilaor-join (and all -join variants) say explicitly what bindings there are. implementations that don’t use a binding will just not unify against it, which is what you want here#2021-04-2816:04favilayou want one implementation to unify against email and ignore phone, and the other to do the opposite#2021-04-2816:05favilaand the results from both are set-unioned#2021-04-2816:06ivanaThanks, it seems I meet this case regularly and forget the rules every time )#2021-04-2816:06favilaIt helps me to think of or* and and* as just syntax sugar for rules, and think about the rule I would write#2021-04-2816:08ivanaThanks again, I'll try to rethink my mental model of it#2021-04-2816:08favilae.g. this would be [[(worker-phone-or-email ?e ?phone ?email) [?e :worker/email ?email]]…] Saying (worker-phone-or-email ?e ?phone) for one impl and (worker-phone-or-email ?e ?email) for the other would be more obviously wrong#2021-04-2816:13ivanamaybe I'll just rewrite the query to use direct values instead of input parameters, because I still don't understand this -join magic perfectly#2021-04-2816:13ivanaWorking with Clojure is much more understandable than with datomic )#2021-04-2815:44ivanaWhile the docs example also uses different var sets, it works
[:find (count ?artist) .
 :where (or [?artist :artist/type :artist.type/group]
            (and [?artist :artist/type :artist.type/person]
                 [?artist :artist/gender :artist.gender/female]))]#2021-04-3010:35ivanaHello. Can I check somehow whether a datomic entity has any links pointing to it? Links may come from a lot of resources, so it would be hard to hardcode all the possible attributes to check.#2021-04-3010:43tatutwouldn’t the VAET index be a good place?#2021-04-3010:44ivanaPossibly yes, I need to read some about it and how to query it#2021-04-3010:44ivanaThanks!#2021-04-3012:06tatut(d/q '[:find ?e :where [_ _ ?e] :in $ ?e] db entity-id)
#2021-04-3012:06tatutshould work also, I think#2021-04-3012:50ivanaI chose (seq (d/datoms db :vaet eid)) and it seems to work, even for cardinality-many attributes#2021-04-3010:37ivanaI just want to implement logic where, if there are any links, I'll mark the entity as deleted (by setting a field on it), but if there are no links - just retract it from the db#2021-04-3010:42ivanaSomething like
[:find (count l?)
 :where
 [?e :worker/id 1018]
 [?l _ ?e]]
but this blank doesn't work...#2021-04-3011:56favilaTypo in your find?#2021-04-3011:57favilaI would expect this to work#2021-04-3011:57favila(Once you fix the typo)#2021-04-3012:52ivanathanks, my shame! it works with typo fix )#2021-04-3021:26fmnoisehi everyone, what do you think, which way of date comparison is faster in datomic query?
[(.after ^java.util.Date ?exp ^java.util.Date ?now)]
or
[(> ?exp ?now)]
#2021-04-3021:26fmnoiseI tend to think java typehinted call is faster#2021-04-3021:27fmnoisebut maybe datomic implements it under the hood for > when used with dates#2021-04-3023:33benoithttps://docs.datomic.com/cloud/query/query-data-reference.html#range-predicates#2021-04-3023:35benoit"The predicates `=`, `!=`, `<=`, `<`, `>`, and `>=` are special, in that they take direct advantage of Datomic's AVET index. This makes them much more efficient than equivalent formulations using ordinary predicates."
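[Editor's note] A minimal sketch of benoit's point above; the attribute :subscription/expires is hypothetical. Writing the comparison as a bare range predicate lets the engine lean on the AVET index, which the type-hinted interop call cannot do:

```clojure
;; Sketch only; :subscription/expires is a made-up attribute.
;; The bare > is one of Datomic's special range predicates, so this
;; clause can use the AVET index, unlike (.after ?exp ?now) interop.
'[:find ?e
  :in $ ?now
  :where
  [?e :subscription/expires ?exp]
  [(> ?exp ?now)]]
```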
#2021-05-0109:12fmnoisethanks @U963A21SL#2021-05-0317:52Michael Stokleyi apologize if this is a naive question. if a datomic transaction fails, for whatever reason, does datomic log the failure and the data it failed to transact?#2021-05-0318:00ghadiwhat if the transaction failed to reach the transactor?#2021-05-0318:25Michael Stokleyi'm not sure whether this question is meant rhetorically? as in, would i still ask my original question if i had known to first ask, and then answer, yours?#2021-05-0318:25Michael Stokleyagain, i apologize if my question is naive#2021-05-0318:26Michael Stokleyis there a version of my original question that is not naive? that you might ask about other databases?#2021-05-0318:26ghadiI guess I’m trying to say that the domain of failures is pretty large#2021-05-0318:27ghadiAre you trying to find some debug info on a previously failed tx?#2021-05-0318:32Michael Stokleyas part of a security threat model, we're trying to ask about user auditing. if we set up our system to audit user actions to create, update, or delete datomic entities (which we have, using the log-why functionality), and we fail to persist an audit record... yeah, would we be able to recover that data.#2021-05-0318:33Michael Stokleymy intuition is that this question might be about a write ahead log, but i'm not sure#2021-05-0318:34Michael Stokleyin our case, the audit log is atomic with the actual create, update, or delete#2021-05-0318:38ghadiaudit logs need to be in "user space"#2021-05-0318:38ghadidatomic does not have entity/datom level access control#2021-05-0318:38ghadiostensibly the log-why stuff is transacted in the same tx as the CRUD#2021-05-0318:38ghadiit will all succeed or all fail together#2021-05-0318:43Michael Stokleyperhaps since it will all succeed or fail together, i can push back on this question.#2021-05-0320:03Dave ATrying to get my head wrapped around implementing a product recommendation system using Datomic. 
For example implementing a 'weighted graph' with a pre-calculated similarity score 'edge' between products. Or a "people who bought this item also bought these items" type approach. Anyone have experience with this or have any thoughts on an approach that might be workable?#2021-05-0320:11Joe LaneHi @UCURPS3GE, do you already have a Datomic system with this data or are you evaluating Datomic for a new system?#2021-05-0320:12Dave AThe latter -- haven't committed to Datomic yet, would like to use it for other reasons, but this one aspect I'm not clear on whether it would be a good choice#2021-05-0320:21Joe LaneDatomic Analytics is a feature in both on-prem and cloud that allows for integrating Datomic with analytics tools (Python, R, Metabase, Tableau, Matlab, etc, ) and other services (Spark, Map-Reduce, etc.) by providing a PrestoSQL (recently renamed to Trino) connector plugin.
This bridges the gap between Datomic as an ACID, transactional, system of record and the broader analytics / big data community by now supporting SQL (via Presto/Trino) which most tools support either directly or via JDBC.
With Datomic Analytics, the tables are entirely virtual, meaning the data stays in a single place, and is only realized lazily upon presto queries.
https://docs.datomic.com/cloud/analytics/analytics-concepts.html and https://docs.datomic.com/on-prem/analytics/analytics-concepts.html#2021-05-0320:24Joe LaneAs you begin to plan a system you should reach out to us via the support channels, we always enjoy meeting our customers and helping them succeed!#2021-05-0322:19Dave AThanks for the reply Joe. My product data sets are pretty well-defined, so I'm hoping I can get a reasonably good result with running graph-like traversals to find similar products in Datomic, vs. using a full-blown separate ML solution or service like AWS Personalize. This post https://hashrocket.com/blog/posts/using-datomic-as-a-graph-database provides some hope of achieving that with decent performance. Just wondering if anyone else has worked on this particular problem.#2021-05-0406:55tatutin datomic cloud it seems retractions without value work [:db/retract <eid> :some/attr] to retract any current value(s)… I was surprised by this as the tx data reference documentation doesn't mention this possibility#2021-05-0407:00furkan3ayraktarIt was added more than a year ago in https://docs.datomic.com/cloud/releases.html#616-8879. Could be valuable to update the documentation page.
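[Editor's note] The two :db/retract shapes discussed above, side by side; the entity id and attribute are illustrative, and the value-less form is the one added in release 616-8879:

```clojure
;; Illustrative only; 12345 and :user/email are made-up.
;; With a value: retracts just that one datom.
[[:db/retract 12345 :user/email "old@example.com"]]
;; Without a value: retracts whatever value(s) are currently asserted
;; for that attribute on the entity.
[[:db/retract 12345 :user/email]]
```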
#2021-05-0407:03tatutgood to know#2021-05-0415:17wegiHi there. We are currently setting up datomic on-prem to use. And one thing that is not entirely clear is: How do you handle datomic-pro in the CI? We have several CI tasks, that run a lot through the day (depending on the number of pushes). Downloading datomic-pro with the same credentials every time is surely not a good practice, or is it?#2021-05-0415:21favilaIs this for peers or the transactor? less specific advice: CIs often provide a file caching mechanism. E.g. CircleCI restore_cache and save_cache#2021-05-0415:22favilawe don’t really need a transactor in any of our CI processes, so this may just be a non-problem?#2021-05-0416:47thumbnailWe either use a docker image which we prepare in advance and use for multiple projects. For other apps we use the in-memory feature (although I think that's deprecated since dev-local).#2021-05-0419:55souenzzohttps://github.com/vvvvalvalval/datomock#2021-05-0608:14wegiThanks for the answers :thumbsup:#2021-05-0516:17joshkhlet's say i have one million entities in my database that represent a simple checkin log of the guests at my hotel (business is booming), and i want to find the latest checkin date of all guests. is there a more performant method than using the max aggregate? in my test case it takes ~4 seconds.
(d/q '{:find [(max ?d)]
       :in [$]
       :where [[?g :guest/checkin ?d]]}
     db)
"Elapsed time: 4411.448713 msecs"#2021-05-0516:27joshkhi thought the indexes might have something to do with it. this seems like a better option
(first
 (d/index-pull
  (d/db sort-conn) {:index :avet
                    :reverse true
                    :selector [:guest/checkin]
                    :start [:guest/checkin]}))
"Elapsed time: 524.218481 msecs"#2021-05-0516:29Joe LaneAre you timing this operation from your laptop through the client api + socks proxy to a cloud system?#2021-05-0516:31joshkhyup, very crude measurement. though i thought max runs on the query group, so i'm just gaining* time on the i/o of the query result, right?#2021-05-0516:32Joe Lane500ms is at least an order of magnitude more time than you would experience from within an ion for your d/index-pull operation.#2021-05-0516:33joshkhah yes, i suspect index-pull will be much faster when deployed#2021-05-0516:34joshkhi was still a little surprised to see the max version take so long though#2021-05-0516:36Joe LaneIs this "latest checkin for each guest" or "most recent checkin of any guest"?#2021-05-0516:38joshkhthe most recent checkin of any guest#2021-05-0516:41Joe LaneTry https://docs.datomic.com/cloud/query/raw-index-access.html#index-range if you only need the value.#2021-05-0516:44Joe LaneTBH, in your measurements the network will dominate the time, assuming your data is already cached.#2021-05-0516:50joshkhthat is true, i'll run some more accurate tests in the vpc. still from my local repl i return "simple" queries in under 100ms so i don't think the network is blame for the four second return from the max aggregate. i understand that sticking with the index-pull/index-range is the best scenario anyhow, but i'm still curious why the max aggregate is so much slower than the index pull. do you know?#2021-05-0516:51Joe LaneYou're asking it to do significantly more work.#2021-05-0516:52Joe Lane(dotimes [_ 10]
  (time (d/q '{:find [(max ?d)]
               :in [$]
               :where [[_ :guest/checkin ?d]]}
             db)))#2021-05-0516:52Joe LaneNote the _ . What is the output of that?#2021-05-0516:54Joe LaneActually, I don't think it matters. The query has to consider all ?d values and thus load them all from storage, put them in memory, etc.
Asking the index only considers the first (or last) element and it's done.#2021-05-0516:54Joe LaneSo, yeah, query is doing more work here.#2021-05-0516:54joshkhthat's what i thought. in one case i'm pull and sorting one million values, and in the other case i'm just stepping along the index and stopping at the first/highest* value in my case (when :reverse true anyway)#2021-05-0516:54joshkh"Elapsed time: 3668.642766 msecs"
"Elapsed time: 4459.585873 msecs"
"Elapsed time: 4711.916313 msecs"
"Elapsed time: 4607.739472 msecs"
"Elapsed time: 5062.974172 msecs"
"Elapsed time: 4710.631811 msecs"
"Elapsed time: 4298.420077 msecs"
"Elapsed time: 4505.261057 msecs"
"Elapsed time: 5294.66838 msecs"
"Elapsed time: 4841.231095 msecs"
#2021-05-0516:56Joe LaneAnd now try the same dotimes but with
(first
 (d/index-pull
  (d/db sort-conn) {:index :avet
                    :reverse true
                    :selector [:guest/checkin]
                    :start [:guest/checkin]}))#2021-05-0516:57joshkhyup, of course it's a lot faster
"Elapsed time: 488.688867 msecs"
"Elapsed time: 383.445716 msecs"
"Elapsed time: 395.543983 msecs"
"Elapsed time: 407.429619 msecs"
"Elapsed time: 394.223538 msecs"
"Elapsed time: 425.687384 msecs"
"Elapsed time: 382.726732 msecs"
"Elapsed time: 397.136071 msecs"
"Elapsed time: 392.209467 msecs"
"Elapsed time: 414.867937 msecs"
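[Editor's note] The index-pull approach benchmarked just above, packaged as a small helper; a sketch only, reusing the attribute name from the thread:

```clojure
;; Walks the :avet index on :guest/checkin in reverse, so the first
;; pulled map holds the latest checkin date; no full scan of all one
;; million values as with the (max ?d) aggregate.
(defn latest-checkin [db]
  (:guest/checkin
   (first
    (d/index-pull db {:index    :avet
                      :reverse  true
                      :selector [:guest/checkin]
                      :start    [:guest/checkin]}))))
```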
#2021-05-0516:59Joe LaneAn important thing to look at is the variance between min and max here vs when running inside the vpc vs inside an ion. It gets pretty fast.#2021-05-0517:00Joe Lane(might not matter for you in this case though)#2021-05-0517:01joshkhin my opinion it's an interesting bit of "learned knowledge" about datomic. 🙂 i've seen my fair share of queries written to find the latest this-or-that using min and max aggregates. maybe it's just muscle memory from the SQL days:
(time (j/query pg-db ["select MAX(checkin) from guests"]))
"Elapsed time: 187.651582 msecs"
dubious test because that postgres connection is local. anyway i'll follow your advice and try out min/max from inside the vpc and from an ion. you would expect the ion version to run fastest, right?#2021-05-0517:02Joe LaneYes, but the point of the ion test is to discern "What am I really measuring?"#2021-05-0517:03joshkhbecause the ion is running in the same memory space#2021-05-0517:05Joe LaneYes, but more importantly, You're NOT measuring network overhead or auth fns or lambda coldstart, etc You're focusing on JUST the time it takes to do the query/db operation, many times, after a warmup period of issuing the query a few times to warm all the caches (if the data fits in cache).#2021-05-0518:01joshkhThanks @U0CJ19XAM, those are some very useful details#2021-05-0518:43JohnJCurious, why are you saving the data to both a rdbms and datomic?#2021-05-0518:44Joe LaneI don't think he is, I think he was using that as an example of what he was used to doing in other systems to achieve the same thing.
#2021-05-0517:59jdkealywhere can i find docs on the map query syntax ?#2021-05-0518:04Joe LaneOn-Prem or Cloud?#2021-05-0518:06jdkealyOn-prem#2021-05-0518:09Joe Lanehttps://docs.datomic.com/on-prem/query/query.html#timeout#2021-05-0518:09Joe LaneThere is an example of passing a map to d/q, but were you talking about the map form of the query itself?#2021-05-0518:11jdkealyI'm trying to figure out how to programmatically generate a datomic query#2021-05-0518:12Joe Lanehttps://github.com/Datomic/day-of-datomic/blob/20c02d26fd2a12481903dd5347589456c74f8eeb/tutorial/building_queries.clj#L65#2021-05-0518:14Joe LaneDo you have a specific scenario in mind?#2021-05-0518:35jdkealyImagine i had the schema like the above and I had optional fields :firstname and :lastname, and i would reduce over the inputs and programmatically add
{:where [ ] } clauses depending on whether the key exists or not#2021-05-0518:40Joe Lane(cond-> '{:find [[?e ...]]
          :in [$]
          :where []}
  lastName (update :in conj '?lname)
  lastName (update :where conj '[?e :user/lastName ?lname])
  firstName (update :in conj '?fname)
firstName (update :where conj '[?e :user/firstName ?fname]))#2021-05-0518:41Joe LaneJust use clojure ¯\(ツ)/¯#2021-05-0518:44jdkealyawesome 🙂 thanks#2021-05-0608:26danmCan anyone give any rough expected timings on Datomic Cloud operations (primarily datomic.client.api/q and datomic.client.api/transact). We're seeing about 60ms for q and 40ms for transact in the best case, which doesn't seem unreasonable given the overhead of an HTTP connection from the client to transactor etc, especially given we're going over a VPC boundary via a VPC endpoint pointing to the NLB (we didn't want to have to recreate all our existing infra, and Datomic Cloud templates require creating a new VPC, plus we're expecting to need to distribute the DBs across multiple clusters in future anyway).
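[Editor's note] A quick usage sketch of Joe Lane's cond-> builder above, with hypothetical inputs; note that the :args later passed to d/q must follow the same order in which the builder conj'd names onto :in:

```clojure
;; Hypothetical inputs: only firstName is present.
(let [firstName "Ada"
      lastName  nil]
  (cond-> '{:find [[?e ...]] :in [$] :where []}
    lastName  (update :in conj '?lname)
    lastName  (update :where conj '[?e :user/lastName ?lname])
    firstName (update :in conj '?fname)
    firstName (update :where conj '[?e :user/firstName ?fname])))
;; => {:find [[?e ...]], :in [$ ?fname], :where [[?e :user/firstName ?fname]]}
```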
But it'd be useful to know if that was correct or if most people are seeing much lower timings. Especially if the VPC endpoint link is a probable cause for that.#2021-05-0615:01kennyThe canonical answer here is "it depends" 🙂#2021-05-0610:05hanDerPederwhat's the idiomatic way of modelling an ordered collection? an attribute with many cardinality does not enforce order, right? do you model each item in the collection as an entity with next/prev attributes? any helpers for this or do people just roll their own linked list?#2021-05-0610:14tatutthere's a thread about this https://forum.datomic.com/t/handling-ordered-lists/305/3
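[Editor's note] One common answer to the ordered-collection question above is an explicit position attribute maintained by the application; a sketch, with hypothetical attribute names:

```clojure
;; Hypothetical schema: the application stores and updates an explicit
;; position; cardinality-many refs alone carry no order.
[{:db/ident       :item/title
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :item/position
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}]
;; Read side: pull the items, then (sort-by :item/position items).
```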
#2021-05-0610:15tatutbut I don’t think there’s a “one size fits all” solution, it depends on what you need#2021-05-0610:16hanDerPederthanks, just wanted to double check I wasn’t reinventing the wheel here#2021-05-0610:17tatutI find that users usually want things either alphabetically or chronologically (or sorted on some column for table listings)… so explicit order needs are luckily rare in my experience#2021-05-0610:20hanDerPedermy use case is a task-list the user has prioritised herself. so order is kind of the point 🙂 storing an index/priority attribute seems to be the way to go#2021-05-0611:40joshkhi've stood up a new Query Group to be used for Datomic Analytics, but after the stack updated i no longer see my catalogue in presto. we were previously using the default Analytics Endpoint and our catalogue was available for queries.
1. created a new Query Group called test-analytics
2. opened the compute stack CF template and entered the Query Group name test-analytics for the Analytics Endpoint field value
3. saved the stack and waited for the deployment to complete
presto> select * from system.metadata.catalogs;
 catalog_name | connector_id
--------------+--------------
 system       | system
(1 row)
the catalogue existed when Analytics Endpoint value was empty (defaults to system name), and queries worked fine.
i've also tried restarting the gateway and resynchronizing the metaschema with no luck. i can see that the gateway itself is running via SHOW TABLES FROM system.runtime;
any tips for troubleshooting this? thanks!#2021-05-0611:54Joe Lane@joshkh You need to pass the endpoint url for the QG not the name of the QG. #2021-05-0612:28joshkhthanks, i was thrown off by the parameter description that says to provide the query group name.
> Provide the name of a query group if you'd like analytic queries to go to a different endpoint. Defaults to system name.
the endpoint url is the EndpointAddress parameter from the query group CF output, right?
.<query-group-name>.<region>.
i've tried this as the Analytics Endpoint value in the compute stack template and still have the same problem of the missing catalogues.#2021-05-0612:36Joe LaneHave you done the Cli sync dance?
Those nodes wouldn’t have the catalogs in them unless you did. #2021-05-0612:37joshkhi'll try again, maybe i did things out of step this time 🙂#2021-05-0612:49joshkhwe tried resyncing the metaschema again, no luck#2021-05-0613:09Joe LaneHmm. Can you open a support case so we can track this on official channels?#2021-05-0613:12joshkhabsolutely, thanks for the first attempt Joe!#2021-05-0612:58xcenoI need some advice regarding storage:
I've got some multi-dimensional vectors holding integers or doubles (dtype-next tensors to be specific) and I wonder how to save those in Datomic.
What I thought about is to save a tuple of the dimensions, say :some-ns/shape [3 5 4] and store the raw byte-buffer of a tensor along with it. I don't need to query the contents of it and would disable the history for it too. Is that a viable idea or should I rather serialize it and store the blob?
If it's the latter: Are there examples of using an S3 bucket from inside a datomic ion?#2021-05-0705:22tatutthe java sdk or cognitect aws-api work for s3 access#2021-05-0705:22tatutjust give the compute group ec2 role permissions to the bucket#2021-05-0714:24xcenoOkay I went with S3 and aws-api. That was way easier than I thought it would be, thanks!#2021-05-0709:21danmAre there docs anywhere about the expected CPU use of queries vs transactions? Our current setup doesn't yet have query groups, and we're performing a lot more writes (i.e. transacts) than we are queries. I'm seeing CPU hitting 98+% on the transactors, and then everything falls over. I'm curious if creating a query group to offload the queries could/would drop CPU on the transactors a lot more than the ratio of queries/transacts would suggest, because maybe queries are a lot more CPU intensive?#2021-05-0710:47danmAlso, is there documentation anywhere on all the standard graphs on the Datomic Cloud dashboard? Like, TxBytes. Is that a per second average or an aggregate of all the data transmitted since the last datapoint? I'm assuming the latter, as changing the dashboard period, and therefore the interval between datapoints, alters the value significantly.#2021-05-0713:27danierouxA wish question (I wish-and-hope-this-exists):
Does anyone have something that allows me to edit a Datomic cloud database as a spreadsheet? Or as a simple CRUD app?
We have a bunch of static information that we display to the internal users on Metabase - and they want to change the values they see.#2021-05-0714:41respatializedhttps://github.com/hyperfiddle/hyperfiddle
This may be what you're looking for!#2021-05-0714:43mafcocincoIn Datomic, what is the best-practice way to model this relationship: object A contains references (i.e. many instances) of object B and we want a field in object B to be unique within the context of object A. From the documentation, it does not seem like :db/unique (either with :db.unique/identity or :db.unique/value), by itself, is appropriate. Wondering how to correctly model this constraint within the Datomic Schema.#2021-05-0714:55Joe Lane@U6SN41SJC Look into using :db.unique/identity tuples for this, either heterogeneous or composite.
Also, depending on how many "many instances" is, maybe B should point to A ?#2021-05-0714:57mafcocincoTrue. It doesn't matter which direction the index points and that would probably be easier.#2021-05-0714:58Joe LaneHow many is "many instances"?
The answer to which direction it should go depends on the required selectivity of the access patterns. Again, all predicated on "many instance" 🙂#2021-05-0718:58mafcocinco10 or less.#2021-05-0718:59mafcocincoas a guess. A is an environment for our testing platform and B is the meta data for each service that will be tested in that environment. Our platform currently consists of ~8 services and I don't see that number going up significantly.#2021-05-0719:00Joe LaneThen performance doesn't matter here and you should do whatever is most convenient for you. That entire dataset will fit in memory, yay!#2021-05-0717:52kennyIs there a way for me to know which Datomic Cloud query group node a client api request went to?#2021-05-0717:56ghadixy problem#2021-05-0717:57ghadigroans "what are you actually trying to solve?"
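[Editor's note] A sketch of the :db.unique/identity composite-tuple idea from the environment/service modeling thread above; attribute names are hypothetical. The tuple attribute is derived by Datomic from :db/tupleAttrs, so :service/name ends up unique per environment:

```clojure
[{:db/ident       :service/env
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :service/name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 ;; Composite tuple: Datomic keeps this value in sync with its
 ;; :db/tupleAttrs, and the unique constraint enforces one
 ;; :service/name per :service/env.
 {:db/ident       :service/env+name
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:service/env :service/name]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```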
#2021-05-0717:58kennyActually lol'ed 🙂 Knew this was coming.#2021-05-0717:57ghadi🙂#2021-05-0717:59Joe LaneI'm sensing a new precursor to "Everybody drink"
#2021-05-0718:00kennyWe are receiving ~20 datomic client timeouts all on the exact same d/pull call within a 3 minute window, which is surprising because that call doesn't actually pull that much data. I was curious if the node those client api requests went to was overwhelmed.#2021-05-0718:02Joe LaneCheck your dashboard, do you have any throttle events?#2021-05-0718:02kennyNot at that time. The query is set to a 15s timeout and it's hitting that on every one of those calls.#2021-05-0718:03Joe LaneI thought it was a pull?#2021-05-0718:04kennyIt's a query with a pull. e.g.,
(d/q {:query '[:find (pull ?p [* {::props-v1/filter-set [*]}])
               :where
               [_ :customer/prop-group1s ?p]]
      :args [db]
      :timeout 15000})#2021-05-0718:09Joe LaneWere these against the same database?#2021-05-0718:11kennyAll but 2.#2021-05-0718:11ghadidoes that same exact pull call happen at other times of the day?#2021-05-0718:11kennyYes#2021-05-0718:11kennyThat query will always return a seq of 3 maps with < 20 total datoms.#2021-05-0718:12ghadihow long does it ordinarily take outside the problem window?#2021-05-0718:12kenny< 200ms#2021-05-0718:12ghadicool cool...#2021-05-0718:12kennyavg maybe 50ms.#2021-05-0718:14ghadican you launch that pull concurrently (futures / threads) and reproduce the issue?#2021-05-0718:16Joe LaneTry ^^ against a different QG of size 1 and look at its dashboard. #2021-05-0718:17ghadione of the lovable perks of infinite read scaling#2021-05-0718:17ghadithat will at least tell you if the synchronicity is significant#2021-05-0718:20Joe LaneMaybe your on-demand DDB table wasn't provisioned for that demand?#2021-05-0718:50kennyFrom looking at the query group dashboard, I can see that the group was overwhelmed at the time. min cpu of 99 & max of 100. There were only 2 nodes in the group. I also observe that at least one other query resulted in 50.4k count. The overwhelmed system simply manifests itself in those frequent, but small, queries. Thinking the fix is to scale the system up at the time of the 50.4k query.
Separately, does the Query Result Counts graph show the number of datoms a query returns or something else?#2021-05-0718:53Joe LaneThat graph shows the number of results not datoms. A result can be many datoms#2021-05-0718:54kennySo if that query is pull'ing in the :find, it could actually be some scalar * the reported number?#2021-05-0718:56Joe LaneAssuming all the results are uniform, yes, that many datoms would be returned. Datoms isn't really the right measurement here though.#2021-05-0718:57kenny"that many" is scalar * reported number, assuming uniform?#2021-05-0718:59Joe LaneIf I know each pull returns exactly 3 datoms, then the returned datoms is:
reported number * 3 = "that many datoms"
#2021-05-0719:11kennySo I can reproduce the query result by calling count on the result of d/q?#2021-05-0719:12Joe LaneYep#2021-05-0719:50kennyd/pull is not included then?#2021-05-0718:53Joe LaneInstead of scaling the qg up, can you make a separate QG for that other query so they don't affect each other?#2021-05-0718:56kennyYes, that is an option. I'd like a bit more data on which queries are causing that huge result set. I have a couple ideas but need more data to know how to split. Why would you tend to prefer splitting over scaling?#2021-05-0718:56kennyCaching?#2021-05-0718:57Joe LaneYep, but beyond that, these sound like different kinds of workloads.#2021-05-0718:59kennyYeah, they kind of are.#2021-05-0719:01Joe LaneIs one of them a scheduled batch job? You can always spin the QG up just for that job 🙂#2021-05-0719:01kennyAnother option I've been considering is "filling out" my query group with spot instances. It's likely that would solve this problem as well, at a fraction of the cost.#2021-05-0719:01Joe Lane"this problem" <- you know what I'm going to ask.#2021-05-0719:03kennyhttps://clojurians.slack.com/archives/C03RZMDSH/p1620410343010200#2021-05-0719:04kennyGetting timeouts due to hitting peak capacity.#2021-05-0719:05kennye.g., cpu spikes to near 100, some small number of queries timeout, then the event is over.#2021-05-0719:09Joe Lane> Getting timeouts due to hitting peak capacity
^^ That is a symptom, and we still don't know why it occurred, do we?
FWIW, a shorter timeout on your pulls with retry wrapped around it would also alleviate the above symptom because the request would (eventually, but how unlucky can you be?) be routed to a different node.#2021-05-0719:11kennyFair. My hypothesis is those 50.4k queries. I'm betting there are multiple of them.#2021-05-0719:12kenny& there's only 2 nodes in the group at the event time. So if both nodes are processing 1+ 50.4k queries, perhaps pretty unlucky.#2021-05-0719:14Joe LaneSo there are only 2 nodes in the QG and there are 2 queries returning 50.4k results being issued at the same time?#2021-05-0719:50kennyI don't know for certain since I don't have that data instrumented right now but, yes it is likely. There's up to 5 queries that could all run in the same 10s window that are of that size.#2021-05-0723:40kennyDatomic Cloud currently uses the older launch configuration setup in creating ASGs so a mixed group of Spot & On-Demand is not possible 😢 I created a feature request here: https://ask.datomic.com/index.php/607/use-launch-template-instead-of-launch-configuration.#2021-05-1013:37thumbnailI noticed some surprising behaviour with on-prem datomic client 1.0.6202; only the first collection binding will be resolved as entities.
I have a query with two collection bindings containing lookup refs; the second collection binding is not resolved as entities. I can work around it by adding [(datomic.api/entid $ ?y) ?z] to my :where-clause, or by manually constructing a relation binding. Is this to be expected?#2021-05-1013:43Joe LaneHey @UHJH8MG6S could you map this problem to our new guide on https://docs.datomic.com/cloud/tech-notes/writing-a-problem-report.html ?#2021-05-1014:11favila@UHJH8MG6S Out of curiosity, is the binding that isn’t resolved first used in a clause where the attribute is not statically known?#2021-05-1014:12favilaUnless the query planner sees a static pattern like [_ :literal-ref-attr ?y] It doesn’t know that “?y” could possibly be resolved to an entity id. (IME)#2021-05-1014:12favilaso it just tries to match what you literally passed in#2021-05-1015:16thumbnail@U09R86PA4 I use a rule like this: (my-rule? ?x ?y), where ?x and ?y are both expected to be bound, and provided using coll-bindings. The rule is effectively [?x :attr ?y]. When I execute the query an error is thrown (`[?y] not bound in clause: …`). When I reverse my :in-arguments, the error changes to [?x] not bound in clause: .
@U0CJ19XAM I'll try to extend with a repro later :thumbsup::skin-tone-2:#2021-05-1612:46thumbnailHere's the repro of this behaviour using dev-tools 0.9.232.
In this example ?parent is bound, ?child is not. I expected both to be bound. (see https://clojurians.slack.com/archives/C03RZMDSH/p1621188243127100?thread_ts=1620653821.045200&cid=C03RZMDSH)#2021-05-1616:56favilaYour destructuring doesn’t make sense to me. What is ?e2 supposed to be? #2021-05-1616:58favilaThe only choices you have for destructuring are to take the value as a whole unchanged, to take it as a collection of single items, or to take it as a collection of relations.#2021-05-1617:01favilaI think you either want args [db [ref1 ref2]] :in [$ [?e ...]] or args [db [ref1][ref2]] :in [$ [?e...] [?e2 ...]]#2021-05-1617:02favilaYou are doing args [db [ref1 ref2]] :in [$ [[?e ...][?e2 ...]]]#2021-05-1617:03favilaThe query parser isn’t catching this as a syntax error but it should#2021-05-1617:42thumbnailYou're right; when i initially reported this issue I used
[db [ref1][ref2]] :in [$ [?e...] [?e2 ...]]
instead of the syntax in the repro. Let me get an example ready.#2021-05-1618:04thumbnailI updated my example to be more in line with my original report. Sorry for the noise 😅#2021-05-1714:04Joe Lane@UHJH8MG6S If you pass in sets of eids instead of lookup refs it appears to do what you're after.
(d/q {:query '{:find [(pull ?parent [:person/name]) (pull ?child [:person/name])]
:in [$ [?parent ...] [?child ...]]
:where [[?child :person/parent ?parent]]}
:args [(d/db conn)
(into #{}
(map #(->> % (d/pull (d/db conn) [:db/id] ) :db/id)
#{[:person/name "pete"]}))
(into #{}
(map #(->> % (d/pull (d/db conn) [:db/id] ) :db/id)
#{[:person/name "frank"]}))]})
;; Returns
;; => [[#:person{:name "pete"} #:person{:name "frank"}]]
#2021-05-1714:11thumbnailNice! I guess I can also use datomic.api/entid in the query to resolve the lookup refs. I initially expected both collection bindings to resolve the lookup refs, but it seems to do 1 collection at most. Is that intended?#2021-05-1714:17Joe LaneI'll be creating an internal story to investigate this more deeply but for now you should consider it expected behavior.
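Putting together thumbnail's datomic.api/entid idea with the earlier repro, a sketch of what the in-query workaround might look like on-prem: assumes a :person/parent ref schema with unique :person/name as in the repro, and that the query runtime can resolve datomic.api functions; untested.

```clojure
;; Sketch: resolve each lookup ref to an entity id inside the query,
;; so both collection bindings join on ids rather than raw lookup refs.
(d/q '[:find (pull ?parent [:person/name]) (pull ?child [:person/name])
       :in $ [?parent-ref ...] [?child-ref ...]
       :where
       [(datomic.api/entid $ ?parent-ref) ?parent]
       [(datomic.api/entid $ ?child-ref) ?child]
       [?child :person/parent ?parent]]
     db
     [[:person/name "pete"]]
     [[:person/name "frank"]])
```

This sidesteps the question of which collection binding gets resolved, because neither binding is used directly in a data pattern.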
#2021-05-1016:30joshkhis it generally acceptable for a query group to communicate with another query group? for example, one query group behind http direct that sources data from other (non internet facing) query groups?#2021-05-1016:33Joe LaneWhat does ".. sources data from... " mean?#2021-05-1016:34joshkhperhaps via http, where each query group is a microservice#2021-05-1016:36Joe LaneDo you need a database for your internet-facing QG?#2021-05-1016:38Joe LaneWhat is pushing you towards "microservices"?#2021-05-1016:47joshkh> Do you need a database for your internet-facing QG?
i think so, yes. basically i have a few query groups all independently accessible via http-direct and serving their own rest APIs. i would like those query groups to remain as separate applications, but use only one exposed query group to handle my API requests and fetch data from other query groups as needed.
for example: i have an existing query group that handles the customer api, and another existing query group that handles the billing api. i'd like all api requests to be routed through a single query group that will fetch data from the other query group APIs to build a response#2021-05-1016:50joshkh> What is pushing you towards "microservices"?
maybe microservice is too strong of a word here. the problem i'm trying to solve is that i have some independently managed APIs on different query groups, but a need to have information from many of them to make a proper decision about a response to the client. does that make sense?#2021-05-1016:55Joe LaneHave you considered using those n-other services as libraries in your new internet-facing application?
This way you can:
• avoid a network hop
• retain all data in process
• return large result sets without placing a burden on those other services (you're avoiding fan-in)
• Avoid needing to call d/sync or d/as-of to have a consistent view of the database
• Autoscale your internet facing application independently of those other services
• etc...#2021-05-1017:06joshkh> Have you considered using those n-other services as libraries in your new internet-facing application?
absolutely. we actually started that way but at the time (a few years ago) Ions had a very low max timeout on the deployment health check so we ended up splitting the APIs.
services-as-libraries definitely has its benefits, and we're considering adopting polylith to help us move back in that direction. but in the case of at least one Query Group / API that we have, it must remain separate for business reasons (booo) and it's also considerably more resource intensive than the others.
You will incur additional overhead because of this pattern and you might want to measure it to make sure it works for your needs.#2021-05-1018:22joshkhgood idea. so i suppose for the one separate query group i would leave HTTP Direct enabled and make HTTP requests to.. the QG's load balancer? or would i still need to involve an API Gateway?#2021-05-1018:25Joe LaneThat QG's LB endpoint on port 8184 is sufficient, no need to go back out through APIGW.#2021-05-1018:26joshkhexcellent, that's incredibly helpful. thanks again for your support#2021-05-1016:33joshkhi understand that question is vague, but i'm generally aiming towards a micro service architecture
#2021-05-1104:52tatutin datomic cloud analytics, how do I configure username/password to the trino connector (https://trino.io/docs/current/security/password-file.html) ? the datomic documentation only talks about catalog .properties file and not the overall config.properties#2021-05-1204:17tatutany pointers on trino configuration? should I just ssh into the bastion to do changes directly#2021-05-1204:21Joe LaneIt's a bit late, but I'm not clear here what you're trying to do? Datomic Analytics (as of today) still runs presto 348, which is a pre-trino version.#2021-05-1204:29tatutok, the same config should work there… trino is just a name change#2021-05-1204:30tatutI’m trying to expose an analytics endpoint via load balancer to our customer, and need to configure options for the connector#2021-05-1204:31tatutthe datomic-cli analytics sync seems to only copy the catalog and metaschema and doesn’t have a way to do other config changes for presto/trino#2021-05-1204:33Joe LaneTrust me, it's more than just a name change 🙂
Is "our customer" a 3rd party?
What options exactly are you trying to configure for the connector?#2021-05-1204:34tatutthe username/password authentication and tls proxy config#2021-05-1204:35tatutcustomer is 3rd party#2021-05-1204:38Joe LaneHow many databases will your system have?#2021-05-1204:39tatut2 that need analytics access#2021-05-1204:43Joe LaneI'm going to have to sleep on this one.#2021-05-1204:43Joe LaneLet's reconnect tomorrow?#2021-05-1204:43tatutsure, I think we will try just ssh’ing and modifying the config.properties in our dev test environment and see that happens#2021-05-1204:45Joe LaneSomething more useful for me would be a sample configuration / project showing that you can expose a secure presto/trino server with the config you need to your customer, taking datomic cloud out of the picture entirely (just for the sample project).#2021-05-1204:47Joe LaneI'm sure ssh'ing will get it to work once, but it will probably not continue to work upon access gateway restart.#2021-05-1205:30tatutadded 2 lines to /opt/presto-config/config.properties.template
http-server.authentication.type=PASSWORD
http-server.process-forwarded=true
and then added password-authenticator.properties and password.db to /opt/presto-data/etc folder… that worked. Don’t know if that survives restart of the gw#2021-05-1113:27jcfHi all! 👋
Hope everyone is doing well today.
Is there anything special I need to do to get at Math/abs in a Datomic client query? I seem to remember things like this just working, but this would have been with the peer API…
[:find ?va
:where
[_ :foo/long ?v]
[(.doubleValue ?v) ?vd]
[(Math/abs ?d) ?va]]
When I try to execute my query I get an exception, so maybe I need to declare a dependency in my query. I'll peruse the docs now.
1. Caused by clojure.lang.ExceptionInfo
Unable to load namespace for java.lang.Math/abs
#:cognitect.anomalies{:category :cognitect.anomalies/not-found,
:message
"Unable to load namespace for java.lang.Math/abs"}
require.clj: 53 datomic.core.require/anomaly!
require.clj: 51 datomic.core.require/anomaly!
require.clj: 67 datomic.core.require/default-resolver/fn
require.clj: 64 datomic.core.require/default-resolver
require.clj: 57 datomic.core.require/default-resolver
require.clj: 79 datomic.core.require/resolve!
require.clj: 74 datomic.core.require/resolve!
datalog.clj: 1342 datomic.core.datalog/resolve-qualified-fn
datalog.clj: 1336 datomic.core.datalog/resolve-qualified-fn
query.clj: 448 datomic.core.query/resolve-qualified-fns
query.clj: 445 datomic.core.query/resolve-qualified-fns
query.clj: 465 datomic.core.query/parse-query
query.clj: 452 datomic.core.query/parse-query
query.clj: 469 datomic.core.query/load-query
query.clj: 468 datomic.core.query/load-query
I'm assuming I'll need to add type hints to prevent reflection too; hoping primitive types are all good…#2021-05-1113:31Joe LaneTry typehinting that first, it may not be able to find the right method without it.#2021-05-1113:32jcf@U06FTAZV3 I have a ^long and a ^double hint in my query, and I'm seeing the same exception.#2021-05-1113:33Joe Lanepaste it again with these new hints?#2021-05-1113:33jcfCan't type hint a primitive local… I thought that might be a problem.#2021-05-1113:35Joe LaneShow me the query with the hints#2021-05-1113:36jcf'[:find (sum ?va)
:with ?e
:where
[?e :transfer/amount ?v]
[(.doubleValue ?v) ?vd]
[(Math/abs ^double ?vd) ?va]]
#2021-05-1113:36jcfI get a result when I ditch the use of Math/abs and sum the ?vd.#2021-05-1113:37Joe LaneCan you show that query as well#2021-05-1113:37jcf'[:find (sum ?vd)
:with ?e
:where
[?e :transfer/amount ?v]
[(.doubleValue ?v) ?vd]]
#2021-05-1113:38jcfThat gives me back a negative double.#2021-05-1113:38jcfclj-kondo is warning me about reflection. That's a great library!#2021-05-1113:39Joe LaneNow go for the minimal repro.
'[:find ?va
:where
[(ground 42.0) ?vd]
[(Math/abs ^double ?vd) ?va]]
#2021-05-1113:40jcfSame exception with your minimal repro.#2021-05-1113:41jcf1. Caused by clojure.lang.ExceptionInfo
Unable to load namespace for Math/abs
#:cognitect.anomalies{:category :cognitect.anomalies/not-found,
:message "Unable to load namespace for Math/abs"}
require.clj: 53 datomic.core.require/anomaly!
require.clj: 51 datomic.core.require/anomaly!#2021-05-1113:42jcfA call to Math/abs works outside of the query, which is what made me wonder if I need to whitelist the Math namespace, but I think everything in java.lang is available by default.#2021-05-1113:42Joe Lane'[:find ?va
:where
[(ground 42.0) ?vd]
[(java.lang.Math/abs ^double ?vd) ?va]]
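A possible workaround (my assumption, not something confirmed in this thread): route the call through a plain Clojure function in a loadable namespace, since the error is about loading a namespace rather than about the method itself. my.app.math is a hypothetical namespace name.

```clojure
;; Hypothetical helper namespace; Clojure namespaces can be required by
;; the query engine, which Java static methods like java.lang.Math/abs cannot.
(ns my.app.math)

(defn abs
  "Absolute value of a double, delegating to Math/abs."
  ^double [^double x]
  (Math/abs x))

;; Referenced fully qualified inside the query:
'[:find ?va
  :where
  [(ground -42.0) ?vd]
  [(my.app.math/abs ?vd) ?va]]
```

Whether the client's resolver accepts this for a given deployment would need to be verified; the helper itself is plain Clojure.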
#2021-05-1113:43jcfAdding java.lang doesn't help.#2021-05-1113:43Joe LaneHmm.. Can you open a support case for this so I can look into it?#2021-05-1113:44jcfWhere's the place to open support cases these days? http://support.datomic.com?#2021-05-1113:44Joe LaneYep, same as always#2021-05-1113:46Joe LaneWe've also got this handy format that prevents roundtrips https://docs.datomic.com/cloud/tech-notes/writing-a-problem-report.html#2021-05-1113:52jcf@U0CJ19XAM want me to log an issue for this Zendesk error too? I can't create a password because of some janky iframe stuff from the looks of it.#2021-05-1113:53Joe LaneYes, that's weird.#2021-05-1113:53Joe LanePlease include your browser details#2021-05-1113:54Joe Laneand if possible a .har file network recording (or the firefox equivalent) of the network requests made.#2021-05-1113:58jcfThe support email that gets sent out links to a different doc on what info to provide with support requests, and it's from 2016: https://support.cognitect.com/hc/en-us/articles/215581538-Information-to-provide-with-a-support-request#2021-05-1113:58jcfIt doesn't mention providing version numbers, which is probably more helpful with a problem on top of an in-memory database. 🙂#2021-05-1114:00jcfQuick fix for the Zendesk iframe jank is to open the iframe in a new tab, and then submit the form.#2021-05-1114:08jcfI've logged the support request. Thanks, @U0CJ19XAM! 🙇#2021-05-1114:08Joe LaneThank you @U06FTAZV3!#2021-05-1115:04Yarin Kessler• Hi all. So I was going through the ion-starter tutorial, and ran into the following error at https://docs.datomic.com/cloud/ions/ions-tutorial.html#test-your-connection :
- Downloading: com/datomic/ion/0.9.50/ion-0.9.50.pom from datomic-cloud
- Downloading: com/datomic/ion/0.9.50/ion-0.9.50.jar from datomic-cloud
- Error building classpath. Could not find artifact com.datomic:ion:jar:0.9.50 in central ()
◦ Here’s the project’s deps.edn for reference: https://github.com/Datomic/ion-starter/blob/master/deps.edn
◦ I was able to resolve this by adding full S3 access permissions to my IAM Datomic user, based on this tip from https://clojurians-log.clojureverse.org/datomic/2021-03-14/1615742989.291900. However, I’m not clear on why that helped. Why would giving full access to my S3 account help with locating an external jar? I’m completely new to Java/Maven/tools.deps ecosystem so feel free to ELI5. Thanks!#2021-05-1115:08Alex Miller (Clojure team)the ion jars are provided in a Maven repository hosted on s3. while the bucket is public, you must have IAM creds with access to S3 to read it#2021-05-1115:20Yarin KesslerSo my Datomic user was set up according to instructions here: https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-user
Is there a reason that setup doesn’t have the necessary S3 creds baked in?#2021-05-1115:08Joe LaneFWIW, you shouldn't need S3 full access.#2021-05-1115:21Yarin KesslerYea, I figured I don’t need full access, but I don’t know what specifically I do need? Still confused as to why giving access to MY S3 account would affect access to a bucket outside of my account.#2021-05-1115:28Joe LaneThe issue that you haven't given your user access to read ANY S3 buckets, even public ones.#2021-05-1115:43Yarin KesslerSay I were to grant universal read access via an AmazonS3ReadOnlyAccess policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "*"
}
]
}
What that means to me is that I have granted full read access to my buckets.
What would differentiate a policy that gave read access to my buckets vs a policy that gave read access to outside public buckets?#2021-05-1115:46Joe LaneYou should restrict the resource arn I believe.#2021-05-1116:19Yarin KesslerTried this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": "arn:aws:s3:::datomic-releases-1fc2183a/maven/releases/*"
}
]
}
But AWS doesn’t allow it. You can’t set arn to point to outside resource AFAICT. Which means there’s no way to say “You can read S3 public buckets but not my buckets”. Which honestly aligns with my original instinct that external public buckets are public resources and so applying permissions against them makes no sense. So I’m still massively confused.#2021-05-1120:57Pragyan TripathiI started learning datomic/datalog today. I have following pull query that works:
(d/pull db '[*] [:block/id #uuid 0000000-0000-0000-000]) ;; based on unique
Now I want to another resolver that returns a vector filtered based on block/tags
I couldn’t figure out how to write pull query for that:
(d/pull db '[*] [:block/tags :button])
The sample data looks like following:
[{:db/id "block-button-id"
:block/id (uuid-from-string "block-button-1")
:block/tags [:button]
:block/display "Button Block 1"
:block/description "Button Block 1"
:block/value 101155069755482}
{:db/id "block-button-2"
:block/id (uuid-from-string "block-button-2")
:block/tags [:button]
:block/display "Button Block 2"
:block/description "Button Block 2"
:block/value 101155069755483}]
Apologies if it is a trivial question, I would appreciate any help in learning it.#2021-05-1121:12Joe LaneHi @pntripathi9417, I think you're looking for a query, not a pull.
(d/q '[:find (pull ?b [*])
:where
[?b :block/tags :button]] db)
#2021-05-1203:49Pragyan TripathiThanks this helps.
#2021-05-1203:53Joe LaneCheck out http://www.learndatalogtoday.org/
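A small follow-on sketch to the query above: the tag can be passed via :in instead of hard-coding :button, so one query serves any tag. Assumes the same :block/tags schema; needs a live connection, so untested here.

```clojure
;; Same shape as the answer above, with the tag supplied as a query input.
(d/q '[:find (pull ?b [*])
       :in $ ?tag
       :where [?b :block/tags ?tag]]
     db :button)
```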
#2021-05-1123:39naomarikAnything exist that's more updated than this? https://github.com/dazld/awesome-datomic#2021-05-1216:41ennI’m trying to understand the use of the tx-data function in queries. The docs (https://docs.datomic.com/on-prem/api/log.html#log-in-query) give this example:
[(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]
That works for me as expected. But I was hoping to be able to provide values for some of those slots, like so:
[(tx-data ?log ?tx-id) [[?e :label/public-id ?v _ false]]]
This gives me results I don’t understand. ?v gets bound to an entity, not a value--specifically, the entity to which I’d expect ?e to be bound.
?e gets bound to another entity (an entity which definitely does not have a :label/public-id attr) which I wouldn’t expect to be in this datom at all.#2021-05-1216:44favilaI’m a little surprised this works at all. The ?a slot here is a number (an entity id), so I would expect :label/public-id to cause the binding to never unify with anything.#2021-05-1216:49favilaI think this should work as you expect
[(datomic.api/entid $ :label/public-id) ?label-public-id-attr]
[(tx-data ?log ?tx-id) [[?e ?label-public-id-attr ?v _ false]]]
#2021-05-1216:50ennthanks, I think that will work. I agree that it’s weird that it returns anything the other way.#2021-05-1221:07jdkealyis there a limit on how many records i can run fulltext on? I have a list of 46k users and datomic seems to be dying on calling fulltext on their names#2021-05-1221:10Joe LaneWhat is the full query?#2021-05-1221:11jdkealy{:find [[?e ...]],
:in [$ ?lname],
:where [[?e :ent/type "user"]
[(fulltext $ :user/name ?lname) [[?e _ _ _]]]]}#2021-05-1221:14Joe LaneAnd what happens when you remove [?e :ent/type "user"]?#2021-05-1221:40jdkealysame#2021-05-1222:25jdkealywait actually... no it's fast now#2021-05-1222:28jdkealyi don't understand why#2021-05-1223:06Joe Lane1. Find all entities ?e that have an :ent/type
2. Limit the ?e's to only those that have an :ent/type of "user"
3. Find entities that have a :user/name of ?lname and and then bind them to ?e
4. Join ?e's from both results to limit the result set further
5. Take all the resulting ?e's and return a collection of them#2021-05-1223:06Joe Lane^^ That's what your first query did#2021-05-1223:08Joe LaneYou could probably flip those two :where clauses around and it would work great.
The number of ?e's returned by [?e :ent/type "user"] is probably much larger than those returned by the fulltext clause.#2021-05-1306:47mllHi guys, I have a question about the history query: https://stackoverflow.com/questions/67514972/datomic-hides-parts-of-its-history-when-query-is-about-all-attributes Maybe someone could help me?#2021-05-1313:24jaretIt looks like you figured it out that the schema attributes had noHistory set to true. I also commented with a clarification about setting noHistory. While setting `:db/noHistory` should lessen the overall amount stored in history there are no guarantees about how much history is kept and some amount of history may be visible even for attributes with :db/noHistory set to true.#2021-05-1323:13mllThank you so much! I now realise that setting "noHistory" should be done much more carefully and maybe I didnt need noHistory at all....#2021-05-1414:37kennyDDB can return Internal Server Errors (500). Datomic will occasionally get these and return them as an anomaly that looks like this.
{:datomic.client-spi/context-id "95591228-605e-4a62-aa0d-e4a9c9c83906", :cognitect.anomalies/category :cognitect.anomalies/fault, :datomic.client-spi/exception com.amazonaws.services.dynamodbv2.model.InternalServerErrorException, :datomic.client-spi/root-exception com.amazonaws.services.dynamodbv2.model.InternalServerErrorException, :cognitect.anomalies/message "Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalServerError; Request ID: 71DOKMAO3VJQ82UHRMU4MMB7HVVV4KQNSO5AEMVJF66Q9ASUAAJG; Proxy: null)", :dbs [{:database-id "f3253b1f-f5d1-4abd-8c8e-91f50033f6d9", :t 90311936, :next-t 90311937, :history false}]}
Now the weird part is according to the DDB docs on "https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes.http5xx", you should retry these.
> You might encounter internal server errors while working with items. These are expected during the lifetime of a table. Any failed requests can be retried immediately.
Does the scenario "working with items" apply for how Datomic is using DDB? i.e., Should I be retrying these anomalies?#2021-05-1414:38ghadiAFAIK datomic does already retry these, but perhaps the retry policy was exhausted?#2021-05-1414:38kennyDatomic's retry?#2021-05-1414:40ghadiYeah, internally.#2021-05-1414:40ghadi500s can happen during dynamo partition scaling operations #2021-05-1414:44kennyIn your experience, what is the scale of the duration this event may take? Seconds, minutes?#2021-05-1414:40ghadinot uncommon#2021-05-1414:40kennyHuh, ok. Should I be retrying those anomalies from my end then?#2021-05-1414:42ghadiSomeone from datomic team can confirm, perhaps the anomaly is misclassified as a fault (unretriable)
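If application-side retries do turn out to be appropriate (pending confirmation that these faults are transient), a minimal sketch of a retry wrapper, assuming the sync client surfaces anomalies as ex-info data as shown in the pasted anomaly map:

```clojure
;; Sketch only: retry a thunk when it throws a :cognitect.anomalies/fault,
;; with a fixed sleep between attempts; rethrow anything else.
(defn retriable-fault?
  "True when an exception's ex-data looks like a transient fault anomaly."
  [ex]
  (= :cognitect.anomalies/fault
     (:cognitect.anomalies/category (ex-data ex))))

(defn with-retries
  "Calls (f) up to max-tries times, sleeping backoff-ms between attempts
  when retriable-fault? matches the thrown exception."
  [max-tries backoff-ms f]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt max-tries) (retriable-fault? e))
                       {:retry e}
                       (throw e))))]
      (if (contains? result :ok)
        (:ok result)
        (do (Thread/sleep backoff-ms)
            (recur (inc attempt)))))))
```

Usage would be along the lines of (with-retries 3 200 #(d/q {:query q :args [db]})); the category check and backoff policy are assumptions to tune for your workload.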
#2021-05-1415:02kennyOn the topic of miscategorized anomalies, I may have another one. Our system clearly had a fun time last night... A query timeout elapsing should be interrupted, not incorrect.
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "processing clause: [?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all], message: java.util.concurrent.TimeoutException: Query canceled: timeout elapsed", :dbs [{:database-id "493d38a5-5434-4c1d-81c6-c1412460540b", :t 2916826, :next-t 2916827, :history false}]}
at datomic.client.api.async$ares.invokeStatic (async.clj:58)
datomic.client.api.async$ares.invoke (async.clj:54)
datomic.client.api.sync$unchunk.invokeStatic (sync.clj:48)
datomic.client.api.sync$unchunk.invoke (sync.clj:46)
datomic.client.api.sync$eval2238$fn__2261.invoke (sync.clj:123)
datomic.client.api.impl$fn__11642$G__11635__11649.invoke (impl.clj:41)
datomic.client.api.impl$call_q.invokeStatic (impl.clj:150)
datomic.client.api.impl$call_q.invoke (impl.clj:147)
datomic.client.api$q.invokeStatic (api.clj:393)
datomic.client.api$q.invoke (api.clj:365)#2021-05-1421:05Puneet AroraHello. In datomic, is there a way to register a listener for when there's a new basisTs?
(I'm accessing datomic through a java client)#2021-05-1506:11Pragyan TripathiHello Guys. Is there a correct way to store raw edn data in datomic? Only way I could figure out is stringify the edn and parse it when reading.#2021-05-1513:03benoitYou can't store Clojure values as-is in Datomic. You have to represent them using the Datomic schema. https://docs.datomic.com/cloud/schema/schema-reference.html#db-valuetype
You can always store an EDN string using the :db.type/string type but it is limited to 4096 characters according to the docs (same link).
Also, keep in mind that you won't be able to query against the values inside your EDN string.#2021-05-1520:46souenzzoon datomic on-prem, you can use bytes + nippy#2021-05-1611:23Pragyan TripathiI figured out a way to use refs in my schema to solve my problem... Thanks 🙂#2021-05-1521:54Yarin KesslerThe following query was taken straight from Datomic docs (https://docs.datomic.com/cloud/query/query-data-reference.html#variables), but when I run it it goes into an infinite loop of continuous data output, forcing my to terminate my REPL. Any idea why?
(d/q '[:find ?name ?duration
:where [?e :artist/name "The Beatles"]
[?track :track/artists ?e]
[?track :track/name ?name]
[?Track :track/duration ?duration]]
db)
(Running this against mbrainz (https://docs.datomic.com/cloud/examples.html#datomic-samples) on local-dev instance)#2021-05-1522:10Alex Miller (Clojure team)Is that capital ?Track in there? Seems like that should be lowercase to match the others
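For reference, the docs query with the typo corrected, ?Track lowercased to ?track so the duration clause unifies with the other clauses instead of introducing a free variable:

```clojure
(d/q '[:find ?name ?duration
       :where
       [?e :artist/name "The Beatles"]
       [?track :track/artists ?e]
       [?track :track/name ?name]
       [?track :track/duration ?duration]]
     db)
```

(Needs the mbrainz sample database to run, so not executed here.)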
#2021-05-1522:17Yarin KesslerYES! Thanks Alex- I never would have spotted that!#2021-05-1522:45Alex Miller (Clojure team)I dropped a note in our internal support channel to fix the docs, sorry about that#2021-05-1522:11Alex Miller (Clojure team)Might be creating an unintended free variable#2021-05-1522:12Alex Miller (Clojure team)@ykessler ^^#2021-05-1621:44joshkhmaybe someone can help me solve a mystery. i'm replaying my transaction log into a new database and ran into a snag at this datom, which is adding a fact about a reference to entity 69269232556469022. the weird thing is that 69269232556469022 hasn't appeared in the transaction log up until now
(seq (d/tx-range conn {:start 64289 :end 64290}))
=>
({:t 64289,
:data [...
#datom[67580382696205085 715 69269232556469022 13194139597601 true]
...]})
and when i query its history it seems to have never existed
(d/q '{:find [?mystery ?a ?v]
:in [$ ?mystery]
:where [[?mystery ?a ?v]]}
(d/history (d/db conn))
69269232556469022)
=> []
attribute 715 is a standard reference with history enabled by default (though i don't think that would make a difference here, since it's the entity itself that seems to be missing)
any idea what's going on?#2021-05-1622:21Joe LaneWhat is attr 715?#2021-05-1709:03joshkhi've changed the ident name of the attribute below, but other than that it looks like this
#:db{:id 715,
:ident :album/artist,
:valueType #:db{:id 20, :ident :db.type/ref},
:cardinality #:db{:id 35, :ident :db.cardinality/one},
:doc "A reference to the artist of an album"}#2021-05-1709:06joshkhi don't know if it's related but 715 in my original post would have likely been pointing to a composite tuple entity with a uniqueness constraint by identity#2021-05-1714:31Joe LaneIn the old or the new db?#2021-05-1714:33joshkhthis is all in the old db before we get to the part of replaying the tx in the new db#2021-05-1714:43Joe Lane@U0GC1C09L Is this on dev-local, cloud, on-prem?
Can you send back a report of running:
1. (d/pull (d/db conn) '[*] 69269232556469022)
2. (d/pull (d/as-of (d/db conn) 64288) '[*] 69269232556469022)
3. (d/pull (d/as-of (d/db conn) 64289) '[*] 69269232556469022)
4. (d/pull (d/as-of (d/db conn) 64290) '[*] 69269232556469022)
5. (d/pull (d/as-of (d/db conn) 64291) '[*] 69269232556469022)
You can dm me or send this to support.#2021-05-1708:17Florin BraghisHello ! I have a 3 questions about sorting. There is a fairly large collection of photos, each photo has a ‘photo/taken-at’ attribute with value type instant. I want to retrieve 100 photos ordered by taken-at potentially in reverse chronology. What is the idiomatic way to achieve that ? So far I’ve managed to do it using ‘index-pull’ which also has offset, limit and reverse options. Is this the way to do it ? Then, if I want to find a specific photo title in the oldest 100 photos, I would first retrieve the ids using index-pull, then perform a parameterized :find using those ids and the substring as parameters to the query. Alternatively, if I want to find the title in all photos, I first perform the query, then do a sort-by :taken-at on the results. So 3 questions really, but all related :). Is my thinking correct? Thank you !#2021-05-1712:16pyry1. Can't claim to vouch for a huge user base, but I'd say that is indeed idiomatic. A good choice, at any rate.#2021-05-1712:22pyry2. That should work. I guess you could also just add the photo title to the :selector and filter "by hand".#2021-05-1712:24pyry3. Again, I think you could just add the photo title to the :selector instead?#2021-05-1913:29florinbraghisThank you !#2021-05-1813:20IgnasHey. Has anyone ever tried to excise large amounts of data? We currently have 15B datoms, and are thinking of dropping roughly 1/3 - 1/2 of them.
Doing it in one go clearly would not hold, so just wondering whether there are any guidelines/benchmarks that could help us tackle this#2021-05-1815:25jaretMay I ask what is driving your desire to remove 1/3 to half your DB? Is there an underlying problem causing you to consider this option? In general, excision was not designed to clean up old data. https://docs.datomic.com/on-prem/reference/excision.html#performance It was specifically designed to meet https://docs.datomic.com/on-prem/reference/excision.html#motivation and puts substantial burden on indexing. If you were to create too large an excision job without proper batching/testing and understanding you can potentially render the system unusable while indexing finishes. That being said, we have greatly improved the performance of excision in the most recent release and it may be possible to design a batched excision job to address this. If you want to discuss further, I'd encourage you to open a support case via email or the website https://www.datomic.com/support.html and perhaps we can meet to chat about this topic.
Still I wish there was a bit of documentation around the ion execution context so I knew what to expect.#2021-05-2112:32joshkhi'm sure it's for a very good reason and so i'm just curious: if Ion lambdas proxy requests to the compute/query groups, then what is the reason for running them in a JVM runtime rather than something with a quicker cold start?
#2021-05-2112:35joshkhi'm also asking because quite a few of my ion lambdas are synchronous where response time matters, and there are even some hard limits in AWS (for example Cognito has a fixed 5 second timeout on its post confirmation lambda trigger). i can use lambda concurrency to solve the problem at a price 🙂#2021-05-2112:39tatuthave you tried http direct?#2021-05-2112:57joshkhi have yes, and it works really well. in this case i'm referring to (ion) lambdas that should be lambdas by design: handling Cognito triggers, glueing together Step Functions and pipelines, handlers for AppSync resources etc.#2021-05-2112:58joshkhunless i'm missing something and http direct can help there?#2021-05-2113:03tatutok, I assumed you meant web… but nevermind 😄#2021-05-2113:03tatutglad to hear http direct works well, I’ve yet to try it out#2021-05-2113:05cjsauerI’ve been wondering about this myself. HTTP direct required the prod topology because of the NLB requirement, but would it be possible to spin up a NLB manually and use it with solo?#2021-05-2117:55Cameron KingsburySo I can use
[?entity1 ?attrname1 ?attrval]
[?entity2 ?attrname2 ?attrval]
in a :where clause to get ?entity1 and ?entity2 where there exists an ?attrval that matches
---
I'm using
[(q '[:find (seq ?attrval)
      :in $ ?entity ?attrname
      :where [?entity ?attrname ?attrval]]
    db ?entity1 ?attrname1) [[?attrvals1]]]
[(q '[:find (seq ?attrval)
      :in $ ?entity ?attrname
      :where [?entity ?attrname ?attrval]]
    db ?entity2 ?attrname2) [[?attrvals2]]]
(not-join [?attrvals1 ?attrvals2]
  [(seq ?attrvals1) [?element ...]]
  (not [(contains? ?attrvals2 ?element)]))
to get ?entity1 and ?entity2 where all attrvals for ?entity1 exist for ?entity2. Is there a more performant way to do this??
(This feels like a directional "and" to the implicit "or" being applied to each attrval matching in the first case)#2021-05-2118:37Joe LaneHey Cameron, I’m on mobile now so please forgive the brevity and any possible misunderstanding.
Instead of two subqueries, try putting both of the where clauses from each subquery into one top level query and then adding a final clause of [(not= attrval1 attrval2)].
I believe there is No need for the nested subqueries, the not-join, nor the boxing and unboxing via seq and [?element ...]
I’ll try and double check this when I get back at a computer. #2021-05-2118:38Cameron Kingsburysweet! already got rid of the subqueries I think#2021-05-2118:39Joe LaneI hope I’m understanding it correctly haha#2021-05-2118:39Cameron Kingsburythe double not is used to produce an and essentially#2021-05-2118:39Cameron Kingsburyso I am not sure how it would be achieved with only the not=#2021-05-2118:46Cameron Kingsburyalso tried
(not-join [?cat ?dog]
  [?cat :cat/paws ?cat-paw]
  (not-join [?paw ?dog]
    [?dog :dog/paws ?dog-paw]
    [?cat-paw :paws/smaller-than ?dog-paw]))
but it's timing out with large numbers of paws 😉#2021-05-2118:47Cameron Kingsburyand ?cat and ?dog need to be bound, where they didn't need to be in the subqueries...#2021-05-2118:48Cameron Kingsburythe above query testing that all the paws on the cat have a :paws/smaller-than relationship with any paw on the dog#2021-05-2119:15Cameron Kingsburythis seems to be 10x slower than the original#2021-05-2119:19Joe LaneCan I see the actual, full query you're trying to run?#2021-05-2119:29Cameron Kingsburysure one sec#2021-05-2118:07uwoWhat should I make of finding tx-ids with no associated txInstant? (:db/txInstant (d/entity db (d/t->tx t-time))) ;; => nil#2021-05-2119:14uwohmm. we appear to be missing txInstants on the large majority of tx entities:
#_(let [end-t (d/basis-t db) ;; => current basis-t: 104753910
        missing-tx-instant? #(nil? (:db/txInstant (d/entity db (d/t->tx %))))]
    (count (filter missing-tx-instant? (range 0 end-t))))
;; => 84492058#2021-05-2119:22Joe LaneRange probably isn't what you want. The contract is that T is guaranteed to be increasing, not that it always increases by exactly 1.#2021-05-2119:23uwoha! damn. You know I kept wondering about that assumption of mine. Thank you!!!
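A sketch of Joe Lane's point: t values are not dense, so probing every integer with range visits mostly non-existent entities. Walking the transaction log instead yields only real transactions. This assumes the on-prem peer API and an existing conn, and is untested here.

```clojure
(require '[datomic.api :as d])

;; Every entry in the log is a real transaction, so each one has a
;; :db/txInstant by construction -- no need to probe (range 0 end-t).
(let [log (d/log conn)]
  (->> (d/tx-range log nil nil)   ; nil/nil = the whole log
       seq
       count))
```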
#2021-05-2119:27uwosheesh :face_palm: hence datomic.api/next-t#2021-05-2119:37favilaInternally there is a single T counter incremented for newly-minted entity ids (when a tempid needs a new entity id). transaction temp ids are just one of the consumers of that counter#2021-05-2119:38favilaso there is an invariant that for all entity ids in the system, none share a T#2021-05-2119:39uwothanks for the inside scoop!#2021-05-2119:43uwoWe haven't upgraded to have qseq, so I'm having to break a tx-ids query into a set of smaller ranges. I was calculating these smaller ranges with simple arithmetic -- is that still acceptable, or do I need to ensure that the start-t and end-t handed to tx-ids are bona fide t-times?#2021-05-2119:49favilaIt should be fine, but why not something like (->> (d/seek-datoms db :aevt :db/txInstant (d/t->tx start-t)) (map :e) (map d/tx->t) (take-while #(< % end-t)) (partition-all 10000))
Just to be clear though, it should be okay to fabricate a non-existent t-time near the target time when in doubt? It always appeared to work for me, but then maybe I was just being sloppy.#2021-05-2120:10favilaIt depends on what you’re doing#2021-05-2120:11favilad/tx-range, d/seek-datoms are ok with it, because they’re using your number to do array bisection#2021-05-2120:11favilad/as-of and d/since are ok with it because they’re using it for filtering.#2021-05-2120:12uwo#2021-05-2120:12uwoI was feeding each range to this ^#2021-05-2120:13favilayeah should be fine#2021-05-2120:13uwoThanks for the help!!#2021-05-2120:13favilathis would be more efficient without query though#2021-05-2120:13favilawhy not use d/tx-range directly?#2021-05-2120:14uwoSure, I'm not opposed to it. Would it just benefit readability or something more than that?#2021-05-2120:15favilaquery needs to realize and retain the intermediate result sets. that’s why you were chunking in the first place, right?#2021-05-2120:15favilad/tx-range is lazy#2021-05-2120:16uwobingo. no qseq required#2021-05-2120:16favilanot sure qseq would help#2021-05-2120:16uwoit did in my testing#2021-05-2120:16uwoif I pass a month range to the function i shared above I run out of memory before I can process it#2021-05-2120:17uwothe same didn't occur with qseq#2021-05-2120:18favilathat’s surprising because qseq doesn’t AFAIK defer any processing except pull#2021-05-2120:18Joe Laneqseq still needs to realize and retain the intermediate result sets like @U09R86PA4 is saying, it just supports lazy transformations (like pull) which can consume an enormous amount of memory when done eagerly#2021-05-2120:19uwowell, then I didn't test what I thought I was testing.#2021-05-2120:21favilaYour query is the same as this:
(->> (d/tx-range log start-t end-t)
     (mapcat :data)
     (filter #(contains? attr-ids (:a %)))
     (map :e)
     (distinct))
#2021-05-2120:21favilaexcept this is evaluated lazily and incrementally, so memory use is bounded#2021-05-2120:24uwoWelp, color me doubly embarrassed then. I must have been testing with a larger range of time when I was using d/q than when I tested d/qseq#2021-05-2120:28uwoThank you @U09R86PA4 and @U0CJ19XAM
#2021-05-2317:09cjsauerQuestion about optional query inputs: https://ask.datomic.com/index.php/616/how-to-gracefully-handle-optional-query-inputs#2021-05-2317:45kennyOn mobile so can’t write a full form answer but for these use cases, we’ll build queries programmatically. i.e., cond-> :in and :where.
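A minimal sketch of the cond-> approach kenny describes, growing :in and :where per optional input. The attribute names (:person/name, :person/age) are made up for illustration.

```clojure
;; Build the query map incrementally; only bind inputs that were supplied.
(defn people-query [{:keys [name age]}]
  (cond-> '{:find  [?e]
            :in    [$]
            :where [[?e :person/name]]}
    name (-> (update :in conj '?name)
             (update :where conj '[?e :person/name ?name]))
    age  (-> (update :in conj '?age)
             (update :where conj '[?e :person/age ?age]))))

;; invoked with the matching args, e.g.
;; (apply d/q (people-query opts) db (remove nil? [name age]))
```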
#2021-05-2317:56cjsauerI got a prototype of this working just before reading this. It’s actually a bit easier than I first anticipated.#2021-05-2515:16cjsauer@U083D6HK9 do you happen to use this technique to inject authorization constraints onto queries as well? Authorization seems to touch every query in my system, and it occurred to me I might be able to append those rules onto queries using generic helper functions..#2021-05-2518:06kennyWe do not, sorry.#2021-05-2617:25cjsauerNo worries, just curious #2021-05-2408:14plexusI'm having trouble trying to query Datomic Analytics / Presto via JDBC. I have a decimal(38,2) field, and it's causing an exception in the Presto JDBC driver.
(def conn (java.sql.DriverManager/getConnection presto-url "." ""))
(let [stmt (.createStatement conn)]
(.executeQuery stmt "SELECT credit_amount FROM journal_entry_line"))
;;=>
1. Caused by java.lang.IllegalArgumentException
ParameterKind is [TYPE] but expected [LONG]
TypeSignatureParameter.java: 110 com.facebook.presto.jdbc.internal.common.type.TypeSignatureParameter/getValue
TypeSignatureParameter.java: 122 com.facebook.presto.jdbc.internal.common.type.TypeSignatureParameter/getLongLiteral
ColumnInfo.java: 194 com.facebook.presto.jdbc.ColumnInfo/setTypeInfo
PrestoResultSet.java: 1869 com.facebook.presto.jdbc.PrestoResultSet/getColumnInfo
PrestoResultSet.java: 123 com.facebook.presto.jdbc.PrestoResultSet/<init>
PrestoStatement.java: 272 com.facebook.presto.jdbc.PrestoStatement/internalExecute
PrestoStatement.java: 230 com.facebook.presto.jdbc.PrestoStatement/execute
PrestoStatement.java: 79 com.facebook.presto.jdbc.PrestoStatement/executeQuery
If I cast(credit_amount AS varchar) then it works. `getLongLiteral` looks suspicious since it's a decimal field... Not sure if the issue lies with Presto, Datomic Analytics, or the Presto JDBC driver. So I'm mainly asking: what would be the best place(s) to report this?#2021-05-2408:20plexusThis seems to be central line in the stacktrace: https://github.com/prestodb/presto/blob/2ad67dcf000be86ebc5ff7732bbb9994c8e324a8/presto-jdbc/src/main/java/com/facebook/presto/jdbc/ColumnInfo.java#L194
case "decimal":
builder.setSigned(true);
builder.setColumnDisplaySize(type.getParameters().get(0).getLongLiteral().intValue() + 2); // dot and sign
builder.setPrecision(type.getParameters().get(0).getLongLiteral().intValue());
builder.setScale(type.getParameters().get(1).getLongLiteral().intValue()); // <----- getLongLiteral -> ParameterKind is [TYPE] but expected [LONG]#2021-05-2414:56futuroI'm splitting my initial Marketplace master Datomic Cloud stack into a split-stack solo topology. I didn't provide an ApplicationName in my initial setup from the Marketplace (so the System Name is used, as I understand it); should I provide one now?#2021-05-2414:56futuroHave folks found it beneficial to provide the ApplicationName even when it's the same as the SystemName?#2021-05-2416:05pinkfrogSay I want to connect to the in-memory database of an on-prem datomic peer server. What’s the value to specify for :endpoint? (see https://docs.datomic.com/client-api/datomic.client.api.html#var-client)#2021-05-2416:14pinkfrogFundamentally, I want to perform unit test with datomic (on prem).#2021-05-2416:27pinkfrogOne issue with spinning up an in-memory peer server and connecting to it is that, the peer server listens on a TCP port. So we cannot really run two instances of the test because the two collides on the same port.#2021-05-2416:26futuroAs a docs heads up, there's an empty bullet-point at https://docs.datomic.com/cloud/getting-started/configure-access.html#authorize-gateway{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2021-05-2418:46cjsauerIs it a bad idea to rely on :db/txInstant as the “created at” time for an entity? The instant at which an entity’s :thing/id datom was asserted is a nice natural creation date, but I’m getting the sense that I’m abusing it a bit. For example, I can’t use index-pull to get the “10 latest” things, because that txInstant datom is on a separate entity (the transaction)…#2021-05-2418:50jcromartieIf the entity is created and submitted by an external system, it’s best to require a creation/event time as an input and to verify that is at a point in the recent past.#2021-05-2418:51cjsauerThat’s a good point. Or I suppose import jobs are another reason why one shouldn’t overload the :db/txInstant attribute. It’s really more “this is when it entered the system”, whereas creation time is a domain concern.#2021-05-2418:52jcromartie> wall clock times specified by `:db/txInstant` are imprecise as more than one transaction can be recorded in the same millisecond#2021-05-2418:53jcromartieyou would want to set txInstant on imports, too https://docs.datomic.com/cloud/best.html#set-txinstant-on-imports#2021-05-2418:54jcromartieeven in systems with an RDBMS I like users of a system to provide specific times with their data and also record transaction timestamps#2021-05-2419:01cjsauerWhat if the entity is created by users? Should I be managing created-at/updated-at times manually?#2021-05-2419:10cjsauerAh found some good material on the matter: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2021-05-2508:41joshkhwe tend to explicitly add dates because the historical data is not accessible to our external integrations via Datomic Analytics.
also, if you ever replay your tx log from one database to another then the dates of the transactions will differ#2021-05-2512:27favila@U0GC1C09L “if you ever replay your tx log from one database to another then the dates of the transactions will differ” that’s not completely correct. the :db/txInstant assertion is in the tx log, so it will copy over unless you filter it out#2021-05-2512:29favilaThe use case for allowing this is to back-date data, but the :db/txInstant of any new transaction must be >= the previous one, so this technique is limited to fresh databases#2021-05-2512:30joshkhthat's news to me, thanks for the correction @U09R86PA4#2021-05-2611:50joshkhhttps://docs.datomic.com/cloud/transactions/transaction-processing.html#explicit-txinstant#2021-05-2511:44tatutI have some [x y] location tuples in datomic. If I want to do a bounding box query (xmin,ymin - xmax,ymax) range… I think I need to untuple those and use numeric comparisons… I’m guessing it would be more efficient to model x and y as separate attributes? or can tuples be efficiently pulled from index#2021-05-2512:30favilaTuples can be pulled from index, but you’re going to get them collated by x then y#2021-05-2512:35tatutyeah, seems logical#2021-05-2512:35favilaSomething like this seems most efficient, but it may potentially read a lot of unnecessary datoms:#2021-05-2512:35favila(->> (d/seek-datoms db :avet :location [xmin, ymin])
     (take-while (fn [{[x _y] :v}] (<= x xmax)))
     (filter (fn [{[_x y] :v}] (<= y ymax))))
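For comparison, a sketch of the separate-attribute alternative tatut raises, intersecting two AVET range scans. The attribute names :location/x and :location/y are hypothetical, and this is untested:

```clojure
;; One d/index-range scan per axis, then intersect the entity ids.
;; index-range yields datoms with :v >= start in value order, so cut off
;; at the max with take-while.
(let [in-x (->> (d/index-range db :location/x xmin nil)
                (take-while #(<= (:v %) xmax))
                (map :e)
                set)]
  (->> (d/index-range db :location/y ymin nil)
       (take-while #(<= (:v %) ymax))
       (map :e)
       (filter in-x)))
```

Whichever axis is more selective should drive the outer scan; here the y scan is filtered by the x set.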
#2021-05-2512:36tatutI think I’ll change to separate x/y attributes as either one (x or y range) could be the one that filters out the most#2021-05-2512:36favilaYou might also consider projecting them into a secondary database that supports an r-tree index natively#2021-05-2512:37favilayou could read the tx queue and just pump simple [attribute value entity time] rows into e.g. sqlite#2021-05-2512:37favilafor the coordinate-valued attributes#2021-05-2512:38tatutthat has more moving parts and I don’t want to do it unless I need to#2021-05-2512:38favilathat’s fair#2021-05-2605:12tatutthanks for this, it seems the d/index-range with take-while and filter was fast enough for my purposes at this time… will put heavier GIS data into PostGIS eventually that can do more geo searches#2021-05-2513:20Adriaan CallaertsHi all. I'm having trouble debugging an issue with my datomic installation. When trying to commit a transaction, I get the following exception:
Caused by: clojure.lang.ExceptionInfo: Missing keys {:missing #{:key :rev :id}}
at datomic.common$require_keys.invokeStatic(common.clj:224)
at datomic.common$require_keys.invoke(common.clj:218)
at datomic.kv_cluster$same_ref_QMARK_.invokeStatic(kv_cluster.clj:131)
at datomic.kv_cluster$same_ref_QMARK_.invoke(kv_cluster.clj:128)
at datomic.kv_cluster.KVCluster$fn__17383$fn__17387.invoke(kv_cluster.clj:227)
at datomic.kv_cluster.KVCluster$fn__17383.invoke(kv_cluster.clj:215)
at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
... 1 more
we haven't encountered this error before, but what changed recently was: we upgraded to datomic v1.0.6269 and we're trying a "reset environment"-script which clears the storage (MySQL in our case) and then inserts new data through the normal peer#2021-05-2513:25jaretHi @adriaan.callaerts, what does the "reset environment" script do? If it is deleting the underlying storage (SQL) for Datomic then you have corrupted the DB. Do you have a datomic level backup for this system?#2021-05-2513:27Adriaan CallaertsNo I don't. Mind you we've only tried this on a staging environment so far, so if anything is lost it's not a disaster. The goal for us is to be able to, on regular intervals, create a "fresh slate" for the entire system. I figured I could do that by dropping and recreating the database. Is there any state persisted elsewhere (in peers/transactor) that I should also be aware of and should be cleared?#2021-05-2513:29jaretNo, Datomic uses the underlying storage as a KV store. Deleting the underlying storage will corrupt the Datomic DB. I would be surprised if the transactor would even start against such a DB.#2021-05-2513:30jaretMay I recommend that you run backup at the Datomic level on your production system. https://docs.datomic.com/on-prem/operation/backup.html#2021-05-2513:31jaretRegarding your desire for a fresh state, you could also create a solution using Datomic backup/restore to create such environments#2021-05-2513:31Adriaan CallaertsMy goal is not to restore a backup here. It's to recreate the "new environment"-scenario. 
Shouldn't a transactor act as if it's being started for the first time when it encounters an empty (but validly structured) database?#2021-05-2513:32jaretAh, in that case you will want to follow these steps: https://docs.datomic.com/on-prem/overview/storage.html#sql-database#2021-05-2513:34jaretI actually walked through provisioning a new system on an old blog post here https://jaretbinford.github.io/SQL-Storage/#2021-05-2513:37jaret@adriaan.callaerts are you turning off all peers and transactors before calling your script?#2021-05-2513:37jaretAdditionally, what specifically does the script do?#2021-05-2513:37Adriaan Callaertsno, I might still have the transactor running. So that's probably what's causing the issue...
#2021-05-2513:39jaretYes, absolutely. I'd also ask, could you perhaps utilize delete-database for this solution? (note: if you do go this route you will want to delete the DB and then create a new uniquely named DB. Using the same DB quickly before all resources are cleaned up can lead to an issue.)#2021-05-2513:40jaretI am uncomfortable with anything that alters underlying storage and prefer to work at the Datomic API level if possible 🙂 (but then again, I work for the Datomic team so I am biased)
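jaret's delete-database suggestion might look something like this sketch. The naming scheme is made up; the point is the fresh, uniquely named DB so the old name isn't reused before cleanup finishes:

```clojure
(require '[datomic.api :as d])

(defn reset-environment!
  "Drop the old database and start a fresh one under a new unique name,
  all through the Datomic API -- storage is never touched directly."
  [old-uri uri-prefix]
  (d/delete-database old-uri)
  (let [fresh-uri (str uri-prefix "env-" (System/currentTimeMillis))]
    (d/create-database fresh-uri)
    (d/connect fresh-uri)))
```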
#2021-05-2514:36Adriaan CallaertsI wanted to add that I managed to get things working the way I wanted, thanks to your help. Restarting the transactor and peers did the trick!
#2021-05-2514:41futuroIs there a dependency tree for the Datomic Cloud primary compute group?#2021-05-2514:48Joe LaneWhat are you trying to do @futuro?#2021-05-2514:53futuroI recently deployed my first ion (Yay!) and ran into some dependency difficulties along the way. We're using metosin/jsonista for json parsing, and it relies on a com.fasterxml.jackson.core/jackson-core (plus a couple others) which are newer than what's in the Cloud system. Overriding just jackson-core will result in something like a Class not found exception on load, because two other fasterxml libs are expecting a class from a newer version of jackson-core. If I update all of the dependencies the load errors go away.#2021-05-2514:53futuroThat's the context.#2021-05-2514:53futuroWhat I'd like is to either learn about these conflicts without having to push (and without having to commit anything, so I can tweak and re-tweak as I learn more.#2021-05-2514:55futuroI'm also curious if there's a way to only need to consider my projects dependencies, and not the Datomic Cloud dependencies (maybe a query group with a minimal set? I'm not sure.)#2021-05-2515:11Joe Lane1. You can push an unreproducible build without committing to git and it will return a (hopefully empty) set of dependency conflicts.
2. Your ion runs inside datomic's jvm and shares a classpath so there isn't currently a way to isolate the dependencies.
3. We are unlikely to address the problem by making a QG with minimum dep set.
4. We usually update dependency versions every release.
5. Explicitly matching the same jackson dep version to other jackson deps seems to have addressed your problem, correct?#2021-05-2515:17futuro1. Can I re-use the unreproducible build name for rapid iteration, or do I need to make it unique each time (presuming I don't intend to deploy that build).
2. Yep! I didn't mention it cause I didn't want to propose solutions.
3. Hmm...ok
4. That's what I was hoping for, and why I updated our stack yesterday, but the version of jackson-core is 2.10.1, from Nov 2019, and the latest is 2.12.3 from April. Without knowing what requires this version, or why, I can't say whether going to the latest version makes "sense", but it's definitely old.
5. It has solved the issue where loading throws a Class Not Found exception, but I'm not sure how jsonista relies on the three jackson deps I've overridden, so it's plausible there are issues waiting for me down the road.#2021-05-2515:24Joe LaneThe unreproducible build policy is "latest-push-wins" aka overwrites all prior ones. So your unrepro name can be futuro-awesome-unrepro and you can bang on it all day long 🙂#2021-05-2515:36futuroHuzzah! That definitely helps the fast-iteration on dependency conflicts, thanks Joe!#2021-05-2515:37Joe LaneFWIW, we return a dependency map for you to add to your deps.edn so there are no more conflicts. Is that not working as expected for you @futuro?#2021-05-2515:44futuroSpecifically for overriding top-level deps, yes. With the jackson-core dependency I have it did not, because of three other transitive jackson dependencies that weren't overridden as well.#2021-05-2515:45futuroMy approach so far has been to review my dependency tree to see which of my deps has a dep that's being overridden, to better understand what kind of bugs might be introduced by it being overridden, or if I can downgrade that library to where it relied on the overridden dep at the version that's in the cloud system.#2021-05-2515:19Alex Miller (Clojure team)jackson is probably a transitive dep (via something like transit-clj -> transit-java) so it may be that datomic is taking newest version of top-level deps (but not transitive)#2021-05-2515:19Alex Miller (Clojure team)I know for instance that transit-java is way behind because it is not an easy upgrade to latest jackson there#2021-05-2515:21futuroThat makes sense, and mirrors my experience trying to update jackson in the past.#2021-05-2515:43futuroTwo questions in response to that:
1. Is there a roadmap/plan for updating those dependencies? (This isn't about specific timelines, but instead of prioritization and whether that it's currently being planned for or not)
2. To help improve my thinking on this, I'd love to hear how you all consider and ward against bugs caused by dependency conflicts. Very thorough unit tests? My thinking currently is to start adding tests against my dependencies, alongside my project's tests, but that doesn't quite seem right.#2021-05-2515:47Joe Lane1. See 4. above
2. Your deps will never conflict with our deps. We enforce that. If you need to test against different dep versions (of dubious value IMO) then you need to use our version of the dep in your project. A better strategy might be avoid libraries with big dep graphs.#2021-05-2515:51futuro1. I hear that, though also the system is using, somehow, a version of jackson-core from 2019.#2021-05-2515:52futuro2. Ah, my question may be better worded "I'm trying to figure out how to consider and ward against dep conflicts; how do you all think of this problem (so I may improve my thinking)?"#2021-05-2515:52futuroI've updated the question#2021-05-2516:01Joe Lane"I'm adding a new piece of equipment to my lightweight airplane and it needs to be mounted on the left side of the body. I bought these nuts from the supplier at a different time than these bolts. They are the right thread size, material and spec, but just to be sure they work, I'm considering pouring concrete around the fasteners, just to be safe" <- What I'm hearing 🙂
I realize that's a cheeky response (glad you're my friend @futuro ), but I hope it illustrates the benefit/cost ratio of YOU writing tests for your deps.
I'd rather just avoid the dep altogether and stay light.
#2021-05-2516:09futuroI think that's a mostly accurate read on the situation, and I'd also like to stay light (I'm currently working on adjusting how I deploy the ion to keep the json handling code elsewhere), but I'm not certain it's always going to be possible.#2021-05-2516:10Joe LaneIt sounds like the issue here for you is jsonista depending on jackson. Can you just use a different json library or the version of clojure.data.json that cloud depends on?#2021-05-2516:11futuroI'd say the more accurate cheeky response is "I'm adding a new piece of equipment to my lightweight airplane and it needs to be mounted on the left side of the body. I bought these nuts from the supplier at a different time than these bolts. I think they're the right thread size, material, and spec, but I'm not sure because I'm not actually working with nuts/bolts and it's more complicated than that."
#2021-05-2516:12futuroPossibly yeah, though I need to do a bit more research.#2021-05-2516:13futuroI believe there's a way forward for this specific instance (complicated a bit by using a polylith architecture, which I otherwise quite enjoy), and now I'm trying to understand the issue in a more general sense to navigate this should it come up again in the future.#2021-05-2516:15futuroPartially for my own benefit, and partially because I'm the Datomic Cloud evangelist/lead on my team and want to be able to properly represent the trade-offs/risks with dependency conflicts, and how it relates to Datomic On-Prem (which I don't particularly want to spend time setting up or operating when there's Datomic Cloud around).#2021-05-2516:15Joe LaneI think this issue (in the context of Datomic) is specific to jackson and the other handful of deps that cloud depends on that could conflict with user-space.
I'm not sure you need to generalize, and I'm also not sure I can help give general advice 🙂#2021-05-2516:16futuroThat's legit.#2021-05-2516:16Joe LaneOn-prem has the same situation though with the peer library. This is just normal java classpath dependency conflicts, we just happen to tell you that we overrode your deps in cloud (as opposed to spring boot or other jvm container solutions )#2021-05-2516:16futuroThat is a very helpful piece of information, thank you.#2021-05-2516:19futuroIt's possible the "solution" is to have the Cloud deps tree and assess whether those libraries are in our critical path or not. Whether that's something y'all are open to sharing, I have no idea. And it's also possible that the solution is to build smaller ions and, if push comes to shove, use ECS/EC2/etc to run whatever our critical-path-can't-override-deps code is and have the ions talk to it.#2021-05-2516:26Joe Lane> It's possible the "solution" is to have the Cloud deps tree and assess whether those libraries are in our critical path or not.
A solution needs a problem.
> Whether that's something y'all are open to sharing, I have no idea.
Probably not.
> And it's also possible that the solution is to build smaller ions and, if push comes to shove, use ECS/EC2/etc to run whatever our critical-path-can't-override-deps code is and have the ions talk to it.
Again, what problem does "build smaller ions" solve?
"Smaller" is a characteristic, I could have 10000 deps of 1kb in size.
I highly doubt that the "optimal" solution will ever be don't run in ions because of the problem "my dep conflicts with cloud's dep" .#2021-05-2516:30futuro"Smaller" in terms of the deps tree. The "problem" being a dependency conflict that breaks a needed code path in a project dependency, which can't be resolved by downgrading or upgrading that particular dependency.#2021-05-2516:31Joe LaneDo you actually have that problem or are you hypothesizing?
If you actually have that problem you should contact support and we can help figure out what to do 🙂#2021-05-2516:34futuroI have worked around the current manifestation of this problem, and no longer have that problem (could have sworn I said as much ;) ). If I hit that problem, though, I'll reach out to support 🙂.#2021-05-2516:34futuroThanks for hashing this out with me @U0CJ19XAM 🙂#2021-05-2516:35Joe LaneCan't wait til this whole thing blows over and we can grab a pint @futuro, always fun chatting with you!#2021-05-2516:37futuroOmg, for real. 😅#2021-05-2600:32Jiezhen YiHi,
I am trying to add schema/deprecated to my schema definition following this instruction here https://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html . Something like
{:db/ident :some/identifier,
:db/valueType :db.type/boolean,
:db/cardinality :db.cardinality/one,
:db/doc "Some doc",
:schema/deprecated true}
But got the following
Exception in thread "main" clojure.lang.ExceptionInfo: Unable to resolve entity: :schema/deprecated {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Unable to resolve entity: :schema/deprecated", :entity :schema/deprecated, :db/error :db.error/not-an-entity, :dbs [{:database-id "datomic:", :t 7388, :next-t 7390, :history false}]}
Am I doing something wrong? Thanks!#2021-05-2601:08futuroIt looks like you need to define the schema/deprecated attribute. Once there’s a schema for it in the db you should be able to use it when defining/adding attributes to existing schema entities.
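futuro's suggestion can be sketched as two transactions (a hedged sketch, not from the thread: `conn` is assumed to be an existing client connection, and the doc string on the annotation attribute is mine):

```clojure
(require '[datomic.client.api :as d])

;; 1. Install the annotation attribute itself -- until this is transacted,
;;    :schema/deprecated cannot be resolved as an entity.
(d/transact conn
  {:tx-data [{:db/ident       :schema/deprecated
              :db/valueType   :db.type/boolean
              :db/cardinality :db.cardinality/one
              :db/doc         "True when an attribute should no longer be used."}]})

;; 2. Now the original schema definition can assert it.
(d/transact conn
  {:tx-data [{:db/ident          :some/identifier
              :db/valueType      :db.type/boolean
              :db/cardinality    :db.cardinality/one
              :db/doc            "Some doc"
              :schema/deprecated true}]})
```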
#2021-05-2621:19Drew VerleeIf anyone has a suggestion on my current datomic cloud issue i would love to hear it. I'm currently just googling around the issue waiting for inspiration to strike. https://forum.datomic.com/t/how-to-troubleshoot-that-my-local-web-handler-correctly-returns-the-payload-status-body-etc-but-deployed-api-gateway-warpped-function-doesnt/1844/2#2021-05-2621:36futuroI dropped some debugging tips in that thread, I hope they help!#2021-05-2621:49Drew Verleethanks. ill give it a try. I need to even take minute and understand how to setup and read the logs...#2021-05-2621:55futuroBest of luck!#2021-05-2623:11kennyI see there's a new version of client-cloud (0.8.113) on Maven. In general, we should not update to versions not posted on the https://docs.datomic.com/cloud/releases.html#current page, correct?#2021-05-2623:45Alex Miller (Clojure team)Do you feel .... lucky?#2021-05-2700:16kennyNot too shabby of a Wednesday. #2021-05-2700:21kennyIn all seriousness though, I think the typical release flow for libraries is to only release releases to the release repository. Anything else goes to the snapshot repo.#2021-05-2700:50Alex Miller (Clojure team)In general, I think it’s best to wait for an announcement that may cover anything to be aware of.#2021-05-2705:21onetomI made a proof-of concept for teaching the Datomic client API to understand java.time.Instant s:
https://clojureverse.org/t/teach-java-time-instant-to-datomic-cloud-transact-and-pull/7698
I would love to hear feedback on it!#2021-05-2705:49onetomjust realized i should have posted this to the datomic forum... should i "cross-post" it?#2021-05-2705:57onetomAdded a link to the existing java.time topic: https://forum.datomic.com/t/java-time/1406/4#2021-05-2717:38kennyfyi, created a new feature request for adding ASG metrics to the query group CW dashboard: https://ask.datomic.com/index.php/619/add-query-group-asg-metrics-to-query-group-cw-dashboard#2021-05-3018:37Joe LaneHey @kenny, thanks for the feature request.
There is a 3rd option, which is to add a new widget (to the dashboard we make for you) for whatever your needs are. Before an upgrade I might copy the dashboard so you have a backup in case we change things and it overwrites your changes.
https://forum.datomic.com/t/transactor-stops-responding-with-too-many-open-files-error/1863#2021-05-3120:40Joe Lane@eraad I would be suspicious of your dependencies. This is not related to dynamically constructing queries, but more likely a library that is "opening files" aka opening socket connections (such as outbound http requests). The quickest way to determine where sockets are being opened is likely to connect YourKit or Java Mission Control to a running JVM locally and perform the API requests to your local machine, watching for where / when sockets are being opened.#2021-05-3120:41eraad@lanejo01 Thanks, this is what I was looking for#2021-05-3123:39kennyWhen a Datomic query times out, does Datomic stop all work on that query?#2021-06-0100:05Joe LaneYou’ll have to be more specific about what you mean by “query times out”#2021-06-0116:08kennyIs this because there are different types of timeouts that can happen behind the scenes?#2021-06-0100:13kennye.g.,
(d/q {:query my-q
:args [db]
:timeout 1000})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Datomic Client Timeout#2021-06-0100:14kennyDatomic Client Timeout
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3711)
clojure.lang.ExceptionInfo: Datomic Client Timeout #:cognitect.anomalies{:category :cognitect.anomalies/interrupted, :message "Datomic Client Timeout"}
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$unchunk.invokeStatic(sync.clj:48)
at datomic.client.api.sync$unchunk.invoke(sync.clj:46)
at datomic.client.api.sync$eval64879$fn__64902.invoke(sync.clj:123)
at datomic.client.api.impl$fn__11642$G__11635__11649.invoke(impl.clj:41)
at datomic.client.api.impl$call_q.invokeStatic(impl.clj:150)
at datomic.client.api.impl$call_q.invoke(impl.clj:147)
at datomic.client.api$q.invokeStatic(api.clj:393)
at datomic.client.api$q.invoke(api.clj:365)#2021-06-0105:15plexusThe ask.datomic welcome email links to , which is a broken link. Seems it should be .com instead of .org.#2021-06-0112:47Alex Miller (Clojure team)Doh! Thanks.#2021-06-0223:07Paulo BardesHello there fellow clojurenauts! I’ve been diving into the datomic world, trying to learn the basics but one thing has been bothering me a bit. When running something like this:
(defn -main [& args]
(println "Doing stuff...")
(d/create-database client {:db-name db-name})
(d/connect client {:db-name db-name})
(println "Stuff done."))
Everything runs fine, both messages get printed pretty much as soon as the clojure runtime finishes loading, but after that the process just hangs there for about a minute before exiting.
So far I’ve only tested it with a dev-local client. I’ve also noticed it won’t happen when running on a memory-only system. So my guess would be that the JVM process is probably waiting for some kind of timeout on a transactor thread or something like that.
So finally, my questions are: Am I on the right track here? Is there a way to signal datomic that all threads should quit immediately? Do I risk losing data if the process is killed during this timeout?#2021-06-0223:10Alex Miller (Clojure team)A 1 minute pause is a classic sign that there a future or agent has been used, and the Clojure runtime will have a background thread that takes 1 minute to timeout before the jvm will exit#2021-06-0223:11Alex Miller (Clojure team)(shutdown-agents) will work to allow the jvm to exit #2021-06-0223:12Alex Miller (Clojure team)There is no active work here, it’d just a background cached for reuse if needed - this is not special to Datomic #2021-06-0223:13Alex Miller (Clojure team)https://clojure.org/guides/faq#agent_shutdown#2021-06-0223:18Paulo BardesOhh, that totally makes sense! Thanks for the quick reply :)#2021-06-0223:21favila@bardes0022 datomic on prem has a shutdown function with an optional argument to shut down the agents#2021-06-0223:33Alex Miller (Clojure team)Oh, even better :)#2021-06-0223:33Alex Miller (Clojure team)It’s been a while :)#2021-06-0302:23Paulo Bardes@favila Thanks for the tip!#2021-06-0312:29prncHi 👋
In Datomic Cloud, what is the recommended way of getting a sorted result set from a query?
Two variations on the theme that I’m interested in would be sorted by transaction :db/txInstant or sorted by arbitrary attribute on the entity.#2021-06-0312:36favilaindex-pull if there is one :avet or cardinality-many :aevt index that matches your results and desired order#2021-06-0312:36favilaotherwise, just sort in the application#2021-06-0312:44prncI see, thanks!#2021-06-0316:39uwoI just want to double check, beyond :db/cas and transaction functions, which must be installed, there's no way to ensure a state before transacting, no?#2021-06-0316:47Joe Lane@U09QBCNBY What kind of guarantees are you looking for?#2021-06-0316:57uwoI want to ensure that no other transactions have touched the target datoms entity before committing the transaction. I would use cas, however I need to read one attribute and then set another, sadly.#2021-06-0316:58uwothis is an ad hoc thing, otherwise we would just install a transaction function that could throw if the constraints weren't matched#2021-06-0316:59Joe LaneI'm not sure why cas doesn't support what you want.#2021-06-0317:00Joe Lane{:tx-data [[:db/cas 42 :no/touchy 100 100] [:db/add 9000 :iff/no-touchy-cas "winning?"]]}#2021-06-0317:02uwoAH HA! I just leave the cas'd value the same -- I didn't realize that a single cas would cancel the entire transaction. That's great!!#2021-06-0317:02Joe LaneYay ACID 🙂#2021-06-0317:03uwoRight of course. Man I ask really embarrassing questions. thanks for entertaining them#2021-06-0317:03Joe LaneHaha, no way @U09QBCNBY I love your questions!!#2021-06-0317:04uwoThanks Joe!#2021-06-0618:46cjsauerI’ve just launched a brand new datomic cloud production topology stack of version 781-9041 and am working out a bunch of dep conflicts. One of which is particularly confusing to me. Running ion push shows me this:
:dependency-conflicts
{:deps
#:com.datomic{client-api #:mvn{:version "0.8.37"},
client #:mvn{:version "0.8.86"},
client-impl-shared #:mvn{:version "0.8.69"},
query-support #:mvn{:version "0.8.16"},
client-cloud #:mvn{:version "0.8.80"}}
These versions seem extremely old…when I pin them locally to test I receive exceptions like “d/qseq is not a function”, even though qseq was released all the way back in 668-8927#2021-06-0618:59cjsauerOh man….I just on a whim bumped ion-dev to 0.9.282 and suddenly the dep conflicts are much more minimal. I had followed the instructions to install ion-dev here on this page: https://docs.datomic.com/cloud/operation/howto.html#ion-dev#2021-06-0619:00cjsauerThat version is out of date. I suppose ion’s pinned versions are baked into the ion-dev artifact, which is why my conflicts were so old. Would be great if that page could be updated, and ideally kept in sync automatically to spare others the same dep chasing pain.
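For reference, the fix described above amounts to bumping the alias that pulls in ion-dev. A deps.edn sketch (the 0.9.282 coordinate comes from the message above; check the current release notes before pinning, and the alias shape is assumed from the install instructions, not authoritative):

```clojure
;; deps.edn (fragment)
{:aliases
 {:ion-dev
  {:deps      {com.datomic/ion-dev {:mvn/version "0.9.282"}}
   :main-opts ["-m" "datomic.ion.dev"]}}}
```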
#2021-06-1006:22onetomi was also fighting with those conflicts lately.
luckily, you don't HAVE TO necessarily fix all of them, but obviously that's an unsupported situation.
in my case, the main conflict was caused by the jackson lib and it actually prevented my program from starting, even locally.
after applying these adjustments:
https://github.com/metosin/muuntaja/issues/121
and ignoring some of the remaining conflicts, things seem to work. so far...#2021-06-0711:02tatutI know bigdec scale should be consistent, but I find this weird: I have a tx that has a scale-only change
[109951162801971 360 6.880M true]
[109951162801971 360 6.88M false]
but I can’t reproduce it… If I try to do that again, I get “two conflicting datoms” error from transact#2021-06-0711:04tatutI only noticed because our custom backup/restore failed to restore because of the transaction exception. The version of datomic compute group hasn’t changed from when that first tx was created#2021-06-0714:36Linus Ericssonmentioned here: https://docs.datomic.com/on-prem/changes.html#1.0.6222#2021-06-0804:39tatutThe same is mentioned in datomic cloud, but I don’t see any more details on it… and why does it say “two conflicting datoms” when I try to recreate this situation#2021-06-0813:56cjsauerHello 👋
I’m having an issue with datomic’s http direct integration with api gateway in which the :uri key of the incoming request is always the root path "/" no matter what path I navigate to in the browser. I’ve tried seemingly every combination of “API mappings” in the AWS console, and still can’t seem to get the uri to flow through correctly. Has anyone hit this before?#2021-06-0813:57cjsauer“API mappings” as in custom domains, but the problem still persists even if I navigate directly to the apigw issued endpoint, side-stepping my custom domain altogether.#2021-06-0814:05cjsauerAh! I just figured it out. My proxy integration had forgotten to include the {proxy} portion of the NLB endpoint URL as described in the tutorial http://$(NLB URI):port/{proxy}
That fixed it!#2021-06-0814:05cjsauerduckie
#2021-06-0917:03favilaIs this a bug? I expect _ not to unify across destructuring-binds, but it looks like it does! Using latest on-prem version.
(d/q '[:find ?a ?b
:where
[(ground [0 2]) [_ ?a]]
[(ground [1 2]) [_ ?b]]
])
=> #{} ; WAT?
(d/q '[:find ?a ?b
:where
[(ground [1 2]) [_ ?a]]
[(ground [1 2]) [_ ?b]]
])
=> #{[2 2]}
(d/q '[:find ?a ?b
:where
[(ground [[1 2]]) [[_ ?a]]]
[(ground [[1 2]]) [[_ ?b]]]
])
=> #{[2 2]}
(d/q '[:find ?a ?b
:where
[(ground [[0 2]]) [[_ ?a]]]
[(ground [[1 2]]) [[_ ?b]]]
])
=> #{} ; WAT?
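One workaround for the surprising unification of `_` above (my sketch, not from the thread): replace the blanks with distinct named throwaway variables, which do not unify with each other:

```clojure
;; Same query as the first "WAT?" example, but with named throwaway
;; variables instead of _ in the destructuring binds.
(d/q '[:find ?a ?b
       :where
       [(ground [0 2]) [?x ?a]]   ; ?x binds 0, ?a binds 2
       [(ground [1 2]) [?y ?b]]]) ; ?y binds 1, ?b binds 2
;; => #{[2 2]}
```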
#2021-06-1022:02favilaI filed a support ticket for this.#2021-06-1022:02favila#3163#2021-06-1006:52onetomis it possible to undivert a datomic system, which has been diverted by datomic.dev-local/divert-system,
or should i restart my repl, if i want to do another datomic.dev-local/import-cloud for example?#2021-06-1006:58onetomim also getting an error like this:
Execution error (IllegalArgumentException) at datomic.dev-local.impl/fn (impl.clj:490).
This client does not support dev-local import.
from import-cloud:
(dl/import-cloud
{:source {:server-type :ion,
:region "ap-southeast-1",
:system "XXX",
:endpoint "",
:proxy-port 8182,
:db-name "DBDBDB"}
:dest {:server-type :dev-local
:storage-dir :mem
:system "tmp"
:db-name "copy1"}
:filter {}})
i've tried both com.datomic/client-cloud v`0.8.102` and v`0.8.105`; same error.#2021-06-1007:20onetomah, it seems, i can't import into an in-memory dev-local system 😞
if i use (str (io/file (System/getProperty "user.home") ".datomic" "local")) instead of :mem, then the import succeeds.#2021-06-1007:34onetomis there any reason for not supporting "dev-local import" into in-memory systems?#2021-06-1007:32onetomin the https://docs.datomic.com/cloud/dev-local.html#import-cloud example, the source and the destination datomic :system names and :db-names are different.
do i understand well, that if i want to use dl/divert-system, then my imported copy should have the same :system and :db-name?
it feels very error-prone, because if i forget to call divert-system, i might end up modifying the cloud db instead of the local copy, and im not sure how i would even notice the mistake.
are there any videos / articles demonstrating this workflow?#2021-06-1022:03dvingoI'm curious if there was some thought in the past about adding a query planner to datomic (where the user-specified clause ordering is not meaningful), and if so why it was decided against - and if it hasn't, would there be consideration of adding it?#2021-06-1112:22jdkealyWhen you delete a database locally, does it destroy all the data?#2021-06-1416:42cjsauerBy locally do you mean dev-local? I think in that case calling d/delete-database does indeed destroy the data on disk. I use this during development to re-seed my local db while testing out schema changes. I have it on a hotkey.#2021-06-1223:12Drew VerleeI'm back to fiddling with my datomic cloud solo instance, has anyone run into this cors issue on a solo topology? I can't think of a reasonable next step. https://forum.datomic.com/t/cors-issue/1870#2021-06-1223:20Drew VerleeI believe i'm dealing with a lambda, so https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
"enabling cors support for lambda..." would be relevant. i created an OPTIONS method, though i suppose i should test that theory.#2021-06-1223:22Drew Verleehttps://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-test-cors.html
Looks like it might be useful, I'll try to get some feedback next.#2021-06-1223:30jarrodctaylorHave you read through https://docs.datomic.com/cloud/tech-notes/cors-lambda-proxy.html#2021-06-1223:39jarrodctaylorI see in your forum post you have seen the tech note. Since you can successfully curl the endpoint, when you do that locally do you receive the expected headers?#2021-06-1300:43Drew Verlee@U0508JRJC thanks a lot for the response.
It looks like i do get the expected headers from curl:
< HTTP/2 200
< date: Sun, 13 Jun 2021 00:39:24 GMT
< content-type: application/edn
< content-length: 847
< x-amzn-requestid: ef6a364c-55a7-430b-a2c0-8b2ede0e7bc5
< access-control-allow-origin: *
< access-control-allow-headers: Authorization, Content-Type
< x-amz-apigw-id: A1oFbEOxiYcFutA=
< access-control-allow-methods: GET, PUT, PATCH, POST, DELETE, OPTIONS
< x-amzn-trace-id: Root=1-60c553bc-5652c3b90fe79f8437318ff7;Sampled=0
<
full curl with -v (let me know if i need to pass more options)
➜ ion-starter git:(add-pomodoro-mode) curl -v -d :hat
* Trying 3.23.22.174:443...
* TCP_NODELAY set
* Connected to (3.23.22.174) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.
* start date: Aug 29 00:00:00 2020 GMT
* expire date: Sep 28 12:00:00 2021 GMT
* subjectAltName: host "" matched cert's "*."
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55ebe17c7e10)
> POST /dev/datomic/ HTTP/2
> Host:
> user-agent: curl/7.68.0
> accept: */*
> content-length: 4
> content-type: application/x-www-form-urlencoded
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* We are completely uploaded and fine
< HTTP/2 200
< date: Sun, 13 Jun 2021 00:39:24 GMT
< content-type: application/edn
< content-length: 847
< x-amzn-requestid: ef6a364c-55a7-430b-a2c0-8b2ede0e7bc5
< access-control-allow-origin: *
< access-control-allow-headers: Authorization, Content-Type
< x-amz-apigw-id: A1oFbEOxiYcFutA=
< access-control-allow-methods: GET, PUT, PATCH, POST, DELETE, OPTIONS
< x-amzn-trace-id: Root=1-60c553bc-5652c3b90fe79f8437318ff7;Sampled=0
<
[[#:inv{:sku "SKU-51", :size :small, :color :yellow}]
[#:inv{:sku "SKU-47", :size :xlarge, :color :blue}]
[#:inv{:sku "SKU-39", :size :medium, :color :blue}]
[#:inv{:sku "SKU-19", :size :small, :color :green}]
[#:inv{:sku "SKU-55", :size :medium, :color :yellow}]
[#:inv{:sku "SKU-15", :size :xlarge, :color :red}]
[#:inv{:sku "SKU-35", :size :small, :color :blue}]
[#:inv{:sku "SKU-27", :size :large, :color :green}]
[#:inv{:sku "SKU-63", :size :xlarge, :color :yellow}]
[#:inv{:sku "SKU-3", :size :small, :color :red}]
[#:inv{:sku "SKU-43", :size :large, :color :blue}]
[#:inv{:sku "SKU-59", :size :large, :color :yellow}]
[#:inv{:sku "SKU-31", :size :xlarge, :color :green}]
[#:inv{:sku "SKU-7", :size :medium, :color :red}]
[#:inv{:sku "SKU-11", :size :large, :color :red}]
[#:inv{:sku "SKU-23", :size :medium, :color :green}]]
* Connection #0 to host left intact
Maybe my js/fetch call is off? the content-type? I don't see why that would be a cors issues.#2021-06-1301:20jarrodctaylorI do believe your issue is with the fetch call. You want to provide js as the argument. Perhaps something like
(js/fetch ""
(clj->js {:mode "cors"
:method "POST"
:headers {"Content-Type" "text/plain"}
:body ":hat"}))#2021-06-1301:20jarrodctaylor@U0DJ4T5U1 ^#2021-06-1301:25Drew VerleeYou're clearly right that I would need to pass the map through clj->js.
I was using lambdaisland fetch and I forgot to change the args when I switched to pure JS.#2021-06-1301:44jarrodctaylorHappens to all of us 🙂#2021-06-1303:30pinkfrogI am using the datomic client api and hence I have (d/q foobar) function calls in my codebase. During testing, I want to mask out these functions and return faked ones. What’s the recommended approach to do that? Should I go with with-redefs or extend some protocol? If the latter, what protocol?#2021-06-1411:16Lennart BuitThe db object implements a protocol that supplies (the implementation of) d/q. What we have in our app is a wrapper that intercepts some of those protocol functions and (possibly) amends/replaces their implementation. For one, we have a wrapper for d/q logging queries and their results to a tap.#2021-06-1514:21pinkfrogHi. Can you be more concrete on a wrapper intercepting the protocol function?#2021-06-1514:21pinkfrogHow do you achieve this functionality?#2021-06-1517:13kschltzyou could achieve that via a defrecord implementing said protocol#2021-06-1517:51Lennart BuitYeah, we just have a deftype somewhere that forwards most calls to the ‘original’ db/`connection`/`client`, but amends their implementation:
(deftype MyWrappedDb [orig]
datomic.impl/Queryable
(q [_ arg-map] (println arg-map) (datomic.impl/q orig arg-map))
...)
That said, I’m not saying you should do this — I think it's perhaps better to use dev-local to create a memory db for testing, but if you ever feel like you need to intercept calls on db values, this is a way to do so.#2021-06-1518:00Joe Lane@UGC0NEP4Y that protocol is an impl detail and subject to change. I'd like to go back to your original problem statement. What problem will you solve when you "mask out these functions and return faked ones"?#2021-06-1523:41pinkfrogI want to avoid connecting to a real database when testing some functions that indirectly write to the db.#2021-06-1613:36Joe LaneOn-Prem or cloud?#2021-06-1614:12Joe LaneBoth https://docs.datomic.com/on-prem/peer/peer-getting-started.html and https://docs.datomic.com/cloud/dev-local.html have equivalent memory databases which you could create and tear down for every unit-test, no production-code modification required.#2021-06-1512:02gregHi, I'm trying to build a first app using datalog/datascript. In the db I want to store FX exchange rates. For each pair (eg. GBP/USD, GBP/EUR), for every day, one value. I'm wondering how to design a schema for such an application.
I'm struggling with what should be modeled as an entity. Currency, currency pair, currency pair for a given date, or actually all of them?
I'd be much grateful for some ideas or sample schemas that you think might make sense. Thanks#2021-06-1513:28Joe LaneHi @U023TQF5FM3, are exchange rates directional? e.g. [#inst "2021-01-01" :GBP :USD 10] but [#inst "2021-01-01" :USD :GBP 8]?#2021-06-1514:45greg@U0CJ19XAM yes, this kind of situation is possible:
[#inst "2021-01-01" :GBP :USD 2]
[#inst "2021-01-01" :USD :GBP 0.4]
In addition there might be more then one source of rates, so there might be:
[#inst "2021-01-01" :GBP :USD [:name "BoE"] 2.01]
[#inst "2021-01-01" :GBP :USD [:name "HMRC"] 2.03]#2021-06-1517:53Joe LaneTo give better advice I'd need to know more about the rest of the app, but for those fx entities I'd lean towards representing :GBP->:USD as an entity with a composite tuple of :loc/from and :loc/to where the composite tuple represents identity.
Then for the rates i'd want to know what the access patterns will be and the growth of the dimensions.
The rate entity could be something like:
{:time/at #inst "2021-01-01"
:rate/from-to {:loc/from :GBP :loc/to :USD}
:rate/source {:source/name "BoE"}
:rate/amount 2.01}
That would then have a composite tuple of
[#inst "2021-01-01" 123 456 2.01]
Where 123 is the directed exchange and 456 is the rate source.
You can add additional composite tuples to allow different access patterns in exchange for space.
That being said, if the number of datoms stays small ( sub 1-billion) then who needs the extra tuples.
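Joe Lane's modeling advice can be written out as a schema sketch (all idents are hypothetical, mirroring the entity maps in the thread; not an authoritative design):

```clojure
(def fx-schema
  [;; a directed exchange: :GBP->:USD is one entity, :USD->:GBP another
   {:db/ident :loc/from, :db/valueType :db.type/keyword, :db/cardinality :db.cardinality/one}
   {:db/ident :loc/to,   :db/valueType :db.type/keyword, :db/cardinality :db.cardinality/one}
   ;; composite tuple: identity for the directed pair
   {:db/ident       :loc/from+to
    :db/valueType   :db.type/tuple
    :db/tupleAttrs  [:loc/from :loc/to]
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}

   ;; a single rate observation
   {:db/ident :time/at,      :db/valueType :db.type/instant, :db/cardinality :db.cardinality/one}
   {:db/ident :rate/from-to, :db/valueType :db.type/ref,     :db/cardinality :db.cardinality/one}
   {:db/ident :rate/source,  :db/valueType :db.type/ref,     :db/cardinality :db.cardinality/one}
   {:db/ident :rate/amount,  :db/valueType :db.type/bigdec,  :db/cardinality :db.cardinality/one}
   ;; composite tuple giving the [time, exchange, source] access path and identity
   {:db/ident       :rate/at+from-to+source
    :db/valueType   :db.type/tuple
    :db/tupleAttrs  [:time/at :rate/from-to :rate/source]
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])
```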
#2021-06-1517:34jdkealyWhat's the best way to get two strings like this into an instant that i can save in datomic?
"2021-06-08T16:30:12" "America/New_York"
I tried using clj-time but no dice
(let [datetime-vec (take 8 (parse-timestamp vector date-str))]
(-> (apply zoned-date-time datetime-vec)
(with-zone UTC-offset)
(instant)))
I get
:db.error/wrong-type-for-attribute Value 2021-06-08T16:30:12-04:00[America/New_York] is not a valid :inst for attribute :session/start_time
even though the output of my function is an instant
Trying to insert
#inst "2021-06-08T20:30:12.000000000-00:00"
#2021-06-1517:41tvaughanPerhaps this https://forum.datomic.com/t/java-time/1406 is relevant?#2021-06-1517:43Joe Lane@U1DBQAAMB For now you gotta turn that into a j.u.Date before you persist it in Datomic.#2021-06-1517:44jdkealywhat's a j.u.date? a google search is giving me links to jdate 🙂#2021-06-1517:47jdkealyi guess, Java Util Date?#2021-06-1517:48jdkealy(java.util.Date. (tz/to-instant "2021-06-08T16:30:12" "America/New_York"))
#2021-06-1517:50jdkealyThis seems to work
([date-str, UTC-offset]
(let [datetime-vec (take 8 (parse-timestamp vector date-str))]
(-> (apply zoned-date-time datetime-vec)
(with-zone UTC-offset)
(instant)
inst-ms
(java.util.Date.))))#2021-06-1517:51jdkealyinput
"2021-06-08T16:30:12" "America/New_York"
output
#inst "2021-06-08T20:30:12.000-00:00"
looks legit 🙂
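The same conversion can be done with plain java.time interop and the `(Date/from ...)` tip from later in the thread, no clj-time needed (a sketch; the function name is mine):

```clojure
(import '(java.time LocalDateTime ZoneId)
        '(java.util Date))

(defn local-str->inst
  "Interpret a zone-less ISO datetime string in the given tz id
  and return the java.util.Date that :db.type/instant expects."
  [s tz]
  (-> (LocalDateTime/parse s)   ; "2021-06-08T16:30:12" carries no offset
      (.atZone (ZoneId/of tz))  ; attach the zone, e.g. "America/New_York"
      (.toInstant)              ; absolute point on the timeline
      (Date/from)))             ; j.u.Date for Datomic

(local-str->inst "2021-06-08T16:30:12" "America/New_York")
;; => #inst "2021-06-08T20:30:12.000-00:00"
```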
#2021-06-1517:57Joe LaneFWIW, if you care about the TZ when you query obviously you'll need to stash that as another attribute and reconstitute a ZonedDateTime on the way out. Avoid querying with zones if you can, it can make simple queries slow and complicated.#2021-06-1518:27Joe LaneOh also, you can use (Date/from your-inst)#2021-06-1519:53jdkealycool thanks! Yes, I'm saving 3 attrs, the timezone, the original string, the converted instant (just in case i screw something up, i can regenerate them)#2021-06-1519:53jdkealyI'll only be querying for stuff in the immediate future, so i think I'll just be passing (> (java.util.Inst))#2021-06-1522:26gregWhen accessing a raw index (listing datoms using ), is there a way of substituting the numbers with the entities themselves? I was checking https://docs.datomic.com/cloud/query/raw-index-access.html, but there is nothing about such a thing.
Example:
when I preview datoms for an example data set
(d/datoms db-solar-system {:index :eavt})
I receive something like that:
...
#datom[74766790688845 73 "Sun" 13194139533319 true]
#datom[74766790688845 74 696000.0 13194139533319 true]
#datom[74766790688846 73 "Jupiter" 13194139533319 true]
#datom[74766790688846 74 69911.0 13194139533319 true]
#datom[74766790688847 73 "Saturn" 13194139533319 true]
#datom[74766790688847 74 58232.0 13194139533319 true]
...
And I would like to see attribute names instead of its number.
Is there some Datomic API to do that?#2021-06-1610:45thumbnailthe numbers are eids, so you can just pull them using d/pull.
You could query all attribute names, and use the query result to map over the 4th entry to make it more efficient, depending on your use case.#2021-06-1618:44joshkhis there a way to trace which rules (https://docs.datomic.com/cloud/query/query-data-reference.html#rules) were satisfied during a query execution? i'm working on a rule list that is sometimes flat, sometimes recursive, and i'd like to be able to explain which conditions were met (or not) for auditing purposes#2021-06-1621:07favilaI typically add an extra constant binding to each rule
(defmacro eval-rules [form]
(walk/prewalk
(fn [e]
(if (and (seq? e) (= `unquote (first e)))
(eval (second e))
e))
form))
(eval-rules
'[[(reverse-edge ?from-concept ?type ?to-concept ?relation)
[(ground ~(set/map-invert some-config-map))
[[?type ?reverse-type]]]
[?relation :relation/concept-2 ?from-concept]
[?relation :relation/type ?reverse-type]
[?relation :relation/concept-1 ?to-concept]]])
I'm trying to create rules configured from some code...#2021-06-1715:06Joe Lane(let [some-config-map {:bar :foo :baz :bin}]
[['(reverse-edge ?from-concept ?type ?to-concept ?relation)
[(list 'ground (clojure.set/map-invert some-config-map))
'[[?type ?reverse-type]]]
'[?relation :relation/concept-2 ?from-concept]
'[?relation :relation/type ?reverse-type]
'[?relation :relation/concept-1 ?to-concept]]])
For more inspiration See the https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj .#2021-06-1715:07potetmyeah “just quote the quoted bits” seems most straightforward to me#2021-06-1715:17vlaaadThat's a lot of quoting...#2021-06-1715:22Joe LaneIf you come up with something you REALLY like @U47G49KHQ I'd be interested in seeing it.#2021-06-1715:23vlaaadWell, I've been thinking about it for a couple is hours, and so far eval-rules is the best thing I have...#2021-06-1715:25vlaaadOther things I considered is what you suggested, macro that does the same thing as eval-rules, but that was much more cumbersome than fn version, and syntax quoting with a lot of ~'?type -like symbols#2021-06-1715:28vlaaadAh, I also tried #=, but I couldn't reference the config-map with it, it was interpreted as symbol#2021-06-1715:33Joe LaneHa, first rule of #= club, don't talk about #= club 😉#2021-06-1715:34vlaaadThis #= club membership didn't bear any fruits so far...#2021-06-1714:54vlaaad...and I don't want to pass this config map in addition to rules to queries since it's static. Any advice how to do that?#2021-06-1907:46joshkhcan i pass a rule name into a query as an argument without a macro?
(d/q '{:find [?player]
:in [$ % ?rule-name ?player-name]
:where [[?player :player/name ?player-name]
(?rule-name ?player)]}
db rules 'is-player "player1")
Execution error (IllegalArgumentException) at datomic.core.datalog/resolve-id (datalog.clj:330).
Cannot resolve key: is-player
#2021-06-2019:00Joe LaneUse clojure to construct the query as data
(defn query-players
[db rules player-name wants-is-player-rule?]
(->
(cond->
'{:find [?player]
:in [$ % ?player-name]
:where [[?player :player/name ?player-name]]}
wants-is-player-rule? (update :where conj '(is-player ?player)))
(d/q db rules player-name)))
(query-players (d/db conn) the-rules "player1" true)#2021-06-1915:40kennyCan you rely upon the Datomic Cloud endpoint address always following the format: .<system>.<region>. ?#2021-06-2017:49Joe LaneNo, it’s an address, addresses can change. #2021-06-2019:03kennyOh interesting. When does it change at the moment? What’s the migration strategy to go from one format to another? #2021-06-2019:05Joe LaneAt the moment it doesn't, but you asked if you can rely on the endpoint address "ALWAYS" following that format.#2021-06-2020:29kennyI see. If it were to change, how could that be done safely?#2021-06-2020:47Joe LaneIt’s just a different string. What if the endpoint had a uuid in it?
I’m not sure what you mean by “safely”. #2021-06-2020:56kennyIf it were to change, client applications would need to know about which endpoint to point to. By safely I mean informing the client application which endpoint it should use before and after the switch. #2021-06-2022:00Joe Lane“Client applications” meaning not ions?#2021-06-2022:00kennyCorrect #2021-06-2022:01Joe LaneHow do they know what endpoint to hit right now?#2021-06-2022:11kennyStatically defined string at startup. Seems like a switch of that endpoint would be require application downtime.#2021-06-2022:45Joe LaneDoubtful. That statement is only true because you aren’t using a mechanism to dynamically update that endpoint and the datomic client using it.
#2021-06-2022:50Joe LaneImagine switching the query groups your client applications point to with zero downtime. How would you do it?#2021-06-2100:34kennyOh I see. You’re saying in the event Datomic changes its endpoint, we’d need to do an A/B switchover by deploying an entirely new query group?#2021-06-2101:10Joe LaneThat in combination with either deploying a separate set of client applications pointed at the new QG or having your client applications being able to reset their clients and connections by polling for config values at a low rate (15 mins)#2021-06-2015:59joshkhthe other day i asked about how to audit which query rules are satisfied and favila had a nice suggestion to use ground to return some known value. that works when my "top level" rule returns a grounded value, however i'd like to also audit nested rules as well. any idea how i can aggregate some grounded values from each rule?
here is a non-working example that returns an empty result because i think the bound value of ?rule in the parent rule fails to unify on the different bound values in the nested rules.
(let [rules '[[(is-blue ?item ?rule)
[(ground :is-blue) ?rule]
[?item :item/color "blue"]]
[(is-in-stock ?item ?rule)
[(ground :is-in-stock) ?rule]
[?item :item/inStock? true]]
[(blue-items-in-stock ?item ?rule)
[(ground :blue-items-in-stock) ?rule]
(is-blue ?item ?rule)
(is-in-stock ?item ?rule)]]]
(d/q '{:find [?item (distinct ?rule)]
:in [$ %]
:where [(blue-items-in-stock ?item ?rule)]}
(d/db conn) rules))
=> []
ideally i would end up with something like this:
=> [[92358976734084 #{:blue-items-in-stock :is-blue :is-in-stock}]]#2021-06-2016:32refset> [?item :item/inStock? true ?rule]
is that 4th element intended?#2021-06-2016:51joshkhoops, that was just a typo in the example. thanks for pointing it out.{:tag :div, :attrs {:class "message-reaction", :title "ok_hand"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👌")} " 3")}
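For what it's worth, one way to read the empty result: every occurrence of ?rule inside a rule body must unify to a single value, so the three ground clauses contradict each other. An untested sketch of a workaround, binding the nested audits to throwaway variables and using ground's collection-binding form to enumerate every satisfied rule name:
```clojure
;; Untested sketch of revised rules (same attributes as the example above):
(def audit-rules
  '[[(is-blue ?item ?rule)
     [(ground :is-blue) ?rule]
     [?item :item/color "blue"]]
    [(is-in-stock ?item ?rule)
     [(ground :is-in-stock) ?rule]
     [?item :item/inStock? true]]
    [(blue-items-in-stock ?item ?rule)
     ;; throwaway vars: the nested audit values no longer unify with ?rule
     (is-blue ?item ?rule-blue)
     (is-in-stock ?item ?rule-stock)
     ;; enumerate every satisfied rule name as a separate binding of ?rule
     [(ground [:blue-items-in-stock :is-blue :is-in-stock]) [?rule ...]]]])
```
With (distinct ?rule) in :find this should aggregate all three keywords per matching item.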
#2021-06-2018:39Joe Lane@joshkh it's your lucky day 🙂
tl;dr, use clojure.core/tap> and add however much tracing you want, whether it's just a single value or a map that you construct in your rule in order to capture the inputs.
The below snippet should be added to the siderail (with whatever filename you want) of https://github.com/Datomic/ion-starter .
;; This assumes you're using dev-local, and have dev-local as a dependency.
;; Edit resources/datomic/ion/starter/config.edn to match your system
(require
'[clojure.data.json :as json]
'[clojure.edn :as edn]
'[clojure.java.io :as io]
'[clojure.pprint :as pp]
'[datomic.client.api :as d]
'[datomic.dev-local :as dl]
'[datomic.ion.starter :as starter]
'[datomic.ion.starter.attributes :as attrs]
'[datomic.ion.starter.edn :as s-edn]
'[datomic.ion.starter.lambdas :as lambdas]
'[datomic.ion.starter.http :as http]
'[datomic.ion.starter.inventory :as inventory]
'[datomic.ion.starter.utils :as utils])
(if-let [r (io/resource "datomic/ion/starter/config.edn")]
(dl/divert-system (edn/read-string (slurp r)))
(throw (RuntimeException. "You need to add a resource datomic/ion/starter/config.edn with your connection config")))
;; test that config works
(def client (starter/get-client))
;; create database and load sample data:
(starter/ensure-sample-dataset)
(def conn (starter/get-connection))
@(def db (d/db conn))
;; Does tap work in queries?
(def rules
'[[(trace> [?tracer])
[(java.util.Date.) ?nt]
[(assoc ?tracer :at ?nt) ?t]
[(tap> ?t) _]]
[(by-type [?type] ?e)
;pre
[(hash-map :phase :pre :rule 'by-type :type ?type :e ?e) ?pre]
(trace> ?pre)
;rule
[?e :inv/type ?type]
;post
[(hash-map :phase :post :rule 'by-type :type ?type :e ?e) ?post]
(trace> ?post)]
[(by-size [?size] ?e)
;pre
[(hash-map :phase :pre :rule 'by-size :size ?size :e ?e) ?pre]
(trace> ?pre)
;rule
[?e :inv/size ?size]
;post
[(hash-map :phase :post :rule 'by-size :size ?size :e ?e) ?post]
(trace> ?post)]
[(by-type-and-size [?type ?size] ?e)
;pre
[(hash-map :phase :pre :rule 'by-type-and-size :type ?type :size ?size) ?pre]
(trace> ?pre)
;rule
(by-type ?type ?e)
(by-size ?size ?e)
; post
[(hash-map :phase :post :rule 'by-type-and-size :type ?type :size ?size :e ?e) ?post]
(trace> ?post)
]])
(defn get-items-by-type-and-size
"Returns pull maps describing all items matching type"
[db type size pull-expr]
(d/q '[:find (pull ?e pull-expr)
:in $ % ?type ?size pull-expr
:where
(by-type-and-size ?type ?size ?e)]
db rules type size pull-expr))
(get-items-by-type-and-size db :shirt :small '[:inv/sku :inv/color :inv/size])
Attached is a screenshot showing the output in REBL after executing get-items-by-type-and-size then browsing the tapped values as a collection of maps.{:tag :div, :attrs {:class "message-reaction", :title "cool"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("🆒")} " 12")}
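For readers following along, the receiving side of the trick is plain Clojure, no Datomic needed: add-tap registers a listener, and tap> delivers values to it on a background thread. A minimal sketch:
```clojure
;; Pure-Clojure sketch of the receiving side of tap>-based tracing.
;; Collect tapped values into an atom via a registered tap listener.
(def tapped (atom []))
(add-tap (fn [v] (swap! tapped conj v)))

(tap> {:phase :pre :rule 'by-type :type :shirt})
(Thread/sleep 200) ; tap delivery is asynchronous
@tapped ;; a vector containing the tapped map
```
In the REBL workflow above, no listener is needed by hand; REBL registers its own tap listener, which is how the traced maps show up for browsing.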
#2021-06-2117:49joshkhwell this is exactly what i was looking for. thank you @U0CJ19XAM!#2021-06-2117:50Joe LaneNP, I just wish I would have thought of it sooner 😂 . Definitely could have used this over the years.
#2021-06-2119:23kennyThe d/client API docs have the following statement:
> Create a client for a Datomic system. This function does not communicate with a server and returns immediately.
However, running a call to d/client seems to indicate this function does communicate with a server in some regard.
(d/client {:server-type :ion
:system "my-system"
:endpoint ""
:region "us-west-2"})
Execution error (ExceptionInfo) at datomic.client.impl.cloud/get-s3-auth-path (cloud.clj:179).
Unable to connect to
Given that, should the d/client docstring be updated to mention it does do some server communication?#2021-06-2119:25kennyAlso, should that function expose a :timeout option?#2021-06-2211:35pithylessOn a test environment, we're seeing this kind of log from the Datomic Peer:
;; ~5 hours CI inactive (if this is relevant)
DEBUG 2021-06-18T13:32:37.702 [clojure-agent-send-off-pool-23]: {:event :kv-cluster/get-val, :val-key "60a4cc9e-ac11-40fe-9bd0-582f8209a403", :phase :begin, :pid 7, :tid 225}
;; 15 minutes pass....
DEBUG 2021-06-18T13:48:25.091 [clojure-agent-send-off-pool-23]: {:event :kv-cluster/get-val, :val-key "60a4cc9e-ac11-40fe-9bd0-582f8209a403", :msec 947000.0, :phase :end, :pid 7, :tid 225}
;; This next one finishes in 21ms:
DEBUG 2021-06-18T13:48:25.094 [clojure-agent-send-off-pool-23]: {:event :kv-cluster/get-val, :val-key "60128f6d-25fb-41f0-a6c2-9c5e73267da7", :phase :begin, :pid 7, :tid 225}
DEBUG 2021-06-18T13:48:25.115 [clojure-agent-send-off-pool-23]: {:event :kv-cluster/get-val, :val-key "60128f6d-25fb-41f0-a6c2-9c5e73267da7", :msec 21.0, :phase :end, :pid 7, :tid 225}
;; and now we start processing the transacts that queued up when the tests started...
INFO 2021-06-18T13:48:25.117 [manifold-execute-43]: {:event :peer/transact, :uuid #uuid "60cca429-7908-49b1-82c8-56b50effb4ce", :phase :start, :pid 7, :tid 282}
DEBUG 2021-06-18T13:48:25.234 [clojure-agent-send-off-pool-24]: {:event :peer/accept-new, :id #uuid "60cca429-7908-49b1-82c8-56b50effb4ce", :phase :begin, :pid 7, :tid 226}
DEBUG 2021-06-18T13:48:25.234 [clojure-agent-send-off-pool-24]: {:event :peer/accept-new, :id #uuid "60cca429-7908-49b1-82c8-56b50effb4ce", :msec 0.462, :phase :end, :pid 7, :tid 226}
INFO 2021-06-18T13:48:25.235 [clojure-agent-send-off-pool-24]: {:event :peer/transact, :uuid #uuid "60cca429-7908-49b1-82c8-56b50effb4ce", :phase :end, :pid 7, :tid 226}
After those 15 minutes of waiting, the peer moved on and transacted all the transactions that happened to queue in the meantime, as if nothing happened. There do not seem to be any interesting or anomalous logs in the Transactor for this time period.
Datomic Peer (1.0.6269) with Postgres storage. The peer and transactor are both running on Kubernetes, but Postgres is hosted outside of the k8s cluster. Any idea what could be going on with the :kv-cluster/get-val and how to go about debugging this further? Is there some timeout we can configure to avoid this kind of situation in a production environment?#2021-06-2309:23pithylessI moved this question to the forum. If anyone has some insights or comments about running Datomic On-Prem with k8s, I'd really appreciate it (either on the forum or in this thread). 🙏
https://ask.datomic.com/index.php/631/blocking-event-cluster-without-timeout-failover-semantics#2021-06-2220:30FabimHey, I just subscribed to Datomic Ions Solo for my pedestal project. On my fist ion-dev push I get the following error. Any suggestions of what I’m doing wrong? my deps.edn has io.pedestal/pedestal.jetty {:mvn/version "0.5.9"} and no jetty-util#2021-06-2220:34Joe Lane@U010L3S1XHS You're not going to believe me, but I believe you have a corrupt, partially downloaded jetty-util jar in your local ~/.m2 directory. Delete the jetty-util jar (NOT your entire m2 directory) and then try to push again?#2021-06-2221:11Fabim@U0CJ19XAM Thanks for your quick answer. I deleted it and now I get Syntax error (ClassNotFoundException) compiling at (cognitect/http_client.clj:1:1).
org.eclipse.jetty.client.HttpClient when using datomic solo up. how do you recommend I reinstall jetty in m2?#2021-06-2221:13Joe LaneDelete the .cpcache in your project directory #2021-06-2221:13Joe LaneBeyond that I’d need to see your deps edn #2021-06-2221:16FabimI deleted .cpcache. but got the same syntax error#2021-06-2313:03Fabim@U0CJ19XAM The deploy worked. Thanks for the tip. Reseting jetty in m2 solved it.#2021-06-2313:12Joe LaneGreat to hear! Always happy to chat#2021-06-2313:13Fabim@U0CJ19XAM I deployed and got a `.datomic-ions/` folder. Can that folder be put into `.gitignore` , or does it need to be pushed with git?#2021-06-2313:23Joe LaneIt can be ignored.#2021-06-2313:37Fabim@U0CJ19XAM Thanks.
When deploying I got a lot of dependency-conflicts. Is there a way to update the dependencies running in my datomic cloud or do I need to explicitly use the old dependency versions in my deps.edn to get rid of that warning?#2021-06-2313:37Joe LaneThe latter.#2021-06-2313:38Joe LaneIf you hit a conflict that you can't work around, contact support.#2021-06-2314:07Fabim@U0CJ19XAM I am responding to a GET request on the API gateway mapped on a lambda with ring.util.response/resource-response to deliver the index.html but the css and js are blocked by the browser with Content Security Policy: The page's settings blocked the loading of a resource Is there a way to deliver a website with (pedestal)ion without errors?#2021-06-2319:13FabimI allowed some origins, as the forum suggested. Now I’m stuck with subfolders not being loaded. Happy to hear how you solved that https://clojurians.slack.com/archives/C03RZMDSH/p1624475471307800#2021-06-2320:03Joe LaneAre you running a local jetty server for development?#2021-06-2320:07Fabimyes#2021-06-2320:08Joe LaneAnd presumably the subfolders work with the same service-map?#2021-06-2320:08Fabimduring development I use integrant#2021-06-2320:08Fabimthe service map is different in development#2021-06-2320:09Joe LaneCan you diff them?#2021-06-2320:11Fabimthis is the difference
(-> service
(dissoc ::http/chain-provider)
(assoc
::http/join? false
::http/routes #(route/expand-routes (deref #'routes))
::http/secure-headers {:content-security-policy-settings
{:default-src "'self'"
:style-src "'self' 'unsafe-inline'"
:script-src "'self' 'unsafe-inline'"}})
(http/default-interceptors)
(http/dev-interceptors)
(http/create-server))#2021-06-2320:15Fabimshould be the setup they do in peodestal-ions-sample#2021-06-2413:50Fabim@U0CJ19XAM I have an idea what the problem was. thanks for your help{:tag :div, :attrs {:class "message-reaction", :title "100"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("💯")} " 2")}
#2021-06-2309:08babardo👋 Using datomic cloud, I set an identity on a tuple composed of 2 attributes:
[:resource/a+b :tuple :one :unique :identity :attrs [:resource/a :resource/b]]
And I wonder if it's possible to activate the identity only if both of the attributes are present in the entity.
(and not only one of them like it seems to be)#2021-06-2312:19favilaIt is not possible. A composite tuple is asserted if any component attr is asserted and there’s no way to only assert the composite if all of them are asserted.#2021-06-2312:20favila(That would be a really nice feature btw.)#2021-06-2313:12babardoOk no 🪄 so! Thanks for the answer 🙏#2021-06-2319:11FabimIs there a way to expose resources in subfolder of `public` so the browser can fetch them when getting the `index.html` in a pedestal ion service?
I set :allowed-origins which enables loading js and css in the public folder.
The images, js and css in public/[subfolder]/ produce 403. I’m using an APIGW connected to a lambda that calls the ring routing as described in https://github.com/pedestal/pedestal-ions-sample#2021-06-2408:30heliosclojure hive-mind, i want you opinion on something. Assume I have a schema.clj which contains my datomic schema (the usual). Every time I start the application (in development and in production) this schema is transacted so whatever has been added gets correctly transacted too. Now, some time ago I added a new attribute {:db/ident :foo/bar ...} , now after a few weeks turns out that I want to rename this attribute like :alice/bob . Following the documentation of datomic i'm supposed to {:db/id :foo/bar :db/ident :alice/bob} , but that clearly doesn't work in development as :foo/bar isn't yet defined when i start my system (it's in the same transaction) but would work on a running system with the attribute already installed. How do you handle these cases?#2021-06-2408:48tatutwe have a schema.edn that contains migrations, each migration has it's own :db/ident#2021-06-2408:49tatutthe startup code runs only new migrations whose ident isn't in the db yet#2021-06-2408:49tatutso migrations is just a list of txs to run in order... or a fully qualified symbol denoting a function to call (connection given as argument)#2021-06-2408:51tatutseparate txs help with that and I like that we can see the schema evolution from the schema file as well#2021-06-2414:21tvaughanWe do the same. This:
[{:db/ident :tx/id
:db/cardinality :db.cardinality/one
:db/valueType :db.type/keyword
:db/unique :db.unique/value}
{:db/ident :tx/status
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :tx.status/applied}]
is always transacted on startup. Everything else looks like:
{:tx-id :migration-0001
:tx-data
[{:db/ident :editor-session/pid
:db/cardinality :db.cardinality/one
:db/valueType :db.type/string
:db/unique :db.unique/value}]}
Or:
{:tx-id :migration-0002
:tx-kind :fn
:tx-data
server.db.migrations/somefn}
These are kept as resources which are transacted by:
(defn tx-resource!
[conn resource]
(tx! conn (resources/read-resource resource)))
(defn- tx-status
[conn tx-id]
(-> (conn->db conn)
(q-by-ident [:tx/id tx-id] [{:tx/status [:db/ident]}])
:tx/status
:db/ident))
(defn- tx-apply!
[conn {:keys [tx-id tx-data]}]
(tx! conn (conj tx-data {:tx/id tx-id :tx/status
:tx.status/applied})))
(defn- tx-applied?
[conn tx-id]
(case (tx-status conn tx-id)
:tx.status/applied true
nil))
(defn tx-idempotent!
[conn resource]
(let [{:keys [tx-id tx-data tx-kind] :as props} (resources/read-resource resource)]
(when-not (tx-applied? conn tx-id)
(case tx-kind
:fn (do
(require (symbol (namespace tx-data)))
(tx-apply! conn (assoc props :tx-data ((resolve tx-data) conn))))
(tx-apply! conn props)))))#2021-06-2414:29heliosthank you for your advice 🙂#2021-06-2414:31tvaughanI wrote most of this before I discovered https://github.com/magnetcoop/stork which is pretty similar. I borrowed its approach to supporting migration functions#2021-06-2812:38Aleh AtsmanHey, what's the best way to construct query that returns all items with specified ids.
Given I have a list of ids, let's say a list of user/id, how do I get all entities corresponding to these ids?{:tag :div, :attrs {:class "message-reaction", :title "sos"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("🆘")} " 3")}
#2021-06-2813:43RollACasterI’m not sure if it’s the best way to go since I’m new to Datomic, but that’s how I would do it:
(d/q
'[:find ?e
:in $ [?id ...]
:where
[?e :user/id ?id]]
db
["id-1" "id-2"])
{:tag :div, :attrs {:class "message-reaction", :title "heart"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("❤️")} " 3")}
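A small extension of the snippet above (hedged: it assumes the same :user/id attribute, and the query name is hypothetical): pull composes with the collection binding when you want full entity maps rather than bare entity ids.
```clojure
;; The same query as data, with pull in the :find clause.
(def users-by-ids-query
  '[:find (pull ?e [*])
    :in $ [?id ...]
    :where [?e :user/id ?id]])

;; usage, given a db value:
;; (d/q users-by-ids-query db ["id-1" "id-2"])
```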
#2021-06-2815:26Aleh Atsman@UB5T2688Y thanks! that's exactly what I was looking for!#2021-06-2914:14robert-stuttafordwhat could cause ex-data to return nil when called on a transaction exception? when i print the caught value, i can see the data i want from ex-data inside the printed output?
@(d/transact (datomic/conn) [[:db/add "temp" :user/email "
{:tag :div, :attrs {:class "message-reaction", :title "white_check_mark"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("✅")} " 2")}
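The behavior in question can be reproduced with plain exceptions, no Datomic needed: when the informative ExceptionInfo is wrapped in an outer exception (as the transaction future does), ex-data on the outer value is nil until you walk to the cause:
```clojure
;; Sketch: ex-data returns nil for non-ExceptionInfo exceptions,
;; so a wrapped anomaly has to be unwrapped with ex-cause first.
(def wrapped
  (RuntimeException. "outer wrapper"
    (ex-info "inner anomaly" {:db/error :db.error/not-an-entity})))

(ex-data wrapped)            ;; => nil (outer exception carries no data map)
(ex-data (ex-cause wrapped)) ;; => {:db/error :db.error/not-an-entity}
```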
#2021-06-2914:23favilaI’m guessing (-> *e ex-cause ex-data)#2021-06-2914:23favilathe actual outer exception will be some kind of execution exception IIRC#2021-06-2914:29robert-stuttafordgoodness thank you#2021-06-2914:57ghadiya elided the useful information @U0509NKGK
the :via vector has the chain#2021-06-2915:14robert-stuttaford@U050ECB92, thanks, you're totally right. not used to working with exceptions much, can you tell 🙂#2021-06-2914:15robert-stuttafordwhat's the correct way to read that :data key?#2021-06-2914:16robert-stuttaford1.0.6269 peer#2021-06-2915:01zendevil.ethI am trying to create a local datomic db.
I am running a local transactor, but when I try to make a database with the uri of the transactor, I get an error:
(def db-uri "datomic:")
(d/create-database db-uri)
:db.error/unsupported-protocol Unsupported protocol :dev#2021-06-2915:10favilaAre you using the “free” version?#2021-06-2915:11favilaif so, the protocol is datomic:free://...#2021-06-2915:12favilakeep in mind also that free is quite far behind, many features you see in documentation may be absent#2021-06-2915:32zendevil.ethI’m using 1.0.629#2021-06-2915:33zendevil.eth“Pro Starter”#2021-06-2915:49Joe LaneDo you actually have a newline in the url?#2021-06-2918:29zendevil.ethactually :free instead of :dev works#2021-06-2918:32zendevil.ethI’m trying to get a transaction from a pull request immediately after creating the record by calling the test-db function:
(defn add-user [user]
(d/transact conn
[{:tx-data
[(assoc user :user/join-timestamp (.getTime (java.util.Date.)))]}]))
(defn get-user [id-string]
(d/pull (get-db) '[*] [:user/id-string id-string]))
(defn test-db [req]
(db/add-user {:user/id-string "foo"
:user/google-id "df"
:user/given-name "sdf"
:user/family-name "sdf"
:user/photo "sdf"
:user/email "sdf"})
(r/response (db/get-user "foo")))
But it gives:
datomic.impl.Exceptions$IllegalArgumentExceptionInfo at /test
:db.error/not-an-entity Unable to resolve entity: :user/id-string
(defn get-user [id-string]
(d/pull (get-db) '[*] [:user/id-string id-string])) ;; <- on this line
I don’t know why it says unable to resolve entity :user/id-string#2021-06-2919:50favilad/transact is invalid#2021-06-2919:50favilaalso this pattern in general is bad. d/transact returns a future which you should dereference--it would have thrown an exception. It also returns (on success) the db-after#2021-06-2919:51favilainstead of treating the db like an ambient stateful resource`(get-db)` , treat it as a value and pass it along#2021-06-2919:53favilaI think you want just @(d/transact conn [user]) to start with#2021-06-2919:53favilahttps://docs.datomic.com/on-prem/best-practices.html#2021-06-2920:10zendevil.ethI have:
(defn add-user [user]
@(d/transact conn
[(assoc user :user/join-timestamp (.getTime (java.util.Date.)))]))
but same problem.
Caused by: java.lang.IllegalArgumentException: :db.error/not-an-entity Unable to resolve entity: :user/id-string#2021-06-2920:11favilaYou don’t include a tempid. Are you treating :user/id-string as a :db/unique :db.unique/identity type but it actually isn’t?#2021-06-2920:12favilaalternatively just include a tempid, e.g. :db/id “my-user”#2021-06-3008:37chrisblomi'd also let get-user take the db as an argument, you can then get the db with the changes from the transaction from the :db-after field
(let [{:keys [db-after]} (db/add-user ...)]
(r/response (db/get-user db-after "foo")))
#2021-06-3010:35Leah NeukirchenIs it possible to override the S3 endpoint for datomic backup to use it with google cloud storage?#2021-06-3016:55favilaIt may be possible if you can do it via some magic amazon s3 client override configuration. The last time I looked into this a few years ago there didn’t seem to be a way.#2021-06-3016:55favilaI think cognitect itself could do it fairly easily when they configure the s3 client in their code; but the trick for us is to do it without changing that code#2021-06-3016:56favilaI guess with enough reverse engineering, reflection, and monkey-patching anything is possible#2021-07-0106:47Leah NeukirchenYeah it needs a withEndpointConfiguration call to the builder... I'm not sure why they don't offer it, there are many other S3 providers. 😞#2021-06-3016:02jacekschaeFor anyone here wondering about some resources to learn Datomic I have been working hard to put this together — there is still long way to go but definitely a good moment to share https://learndatomic.com{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 6")}
#2021-07-0107:01pedrorgirardiThat’s great @U8A5NMMGD! I’m keen to check it out once it’s available.{:tag :div, :attrs {:class "message-reaction", :title "heart"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("❤️")} " 3")}
#2021-06-3016:09joshkhwhat might be a cause of the following error?
(dl/import-cloud {:source {:server-type :ion
:region "the-region"
:system "the-system"
:query-group "the-query-group"
:db-name "the-db"
:endpoint ""
:proxy-port 8182}
:dest {:system "testing"
:server-type :dev-local
:db-name "take-me-to-your-leader"}})
Execution error (IllegalArgumentException) at datomic.dev-local.impl/fn (impl.clj:490) .
This client does not support dev-local import.
that's with the latest com.datomic/dev-local {:mvn/version "0.9.232"} installed locally. strangely, other people on the team who are loading the same project and running the same function are not having issues#2021-07-0107:33Mikko KoskiI've understood that composite tuples can't be deregistered, is that right? I mean, if I add a composite tuple a+b, but later notice that I need to add a new tuple (or well, triple) a+b+c and the tuple a+b becomes useless, Datomic still keeps updating the a+b tuple and it can't be deregistered.
I assume there's some performance penalty to have unused composite tuples. Because of this, we have avoided using composite tuples if we have a doubt that we might need to change it in the future. Is that a valid reason to avoid them? Or am I overestimating the performance penalty of updating composite tuples?#2021-07-0111:18favilaThe performance penalty is a la carte—when you add a composite tuple it’s up to you to “touch” all entities that don’t have it yet to populate it{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 2")}
#2021-07-0111:21favilaThe reason you cant change it probably just comes down to the inherent complexity of schema changes in a temporal db (what happens to the history view? To all old TX records?) and the philosophical stance Rich has against making the same name mean something different over time. His view: just use a new name and deprecate / remove the old one.#2021-07-0111:24favilaNotice that no schema changes which change type are allowed—tuples are not unique in that way#2021-07-0111:28favilaI think you may be thinking of composite tuples as a pure ephemeral projection of "real” data like an index in a relational db. That’s not really how it’s implemented in datomic—it’s more like actual data that the transactor automatically updates when it notices it’s components change#2021-07-0111:29favilaIt doesn’t eagerly project it, it can’t repopulate it for you, and the data is in the same covering indexes as all other data #2021-07-0111:29Mikko KoskiThanks for the answer! But I'm still wondering, isn't there performance penalty in the "just use a new and and deprecate the old" strategy? I mean, if I have attributes :a, :b and :c, and a composite tuple a+b, which I then decide to deprecate in favor of a new composite tuple a+b+c, then whenever I'm changing the attribute :a or :b, Datomic will update the composite tuple a+b, even though it's deprecated.#2021-07-0111:31Mikko Koski> I think you may be thinking of composite tuples as a pure ephemeral projection of "real” data like an index in a relational db. That’s not really how it’s implemented in datomic—it’s more like actual data that the transactor automatically updates when it notices it’s components change
Right... so it's not really a performance penalty, but penalty in storage?#2021-07-0111:34favilaYes#2021-07-0111:35favilaWhich you can mitigate by eg adding noHistory to the attr and removing any value indexes if you have it#2021-07-0111:36favilaIf you really want it gone you need to create new component attrs also{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 2")}
#2021-07-0112:45BoyanWould you say then @U09R86PA4 that the storage cost of old unnecessary composite tuples shouldn't really be much of a factor in deciding between composites vs the other types of tuples, when a composite would otherwise work?#2021-07-0112:47favilaI would say that it’s rare that storage cost is a factor#2021-07-0112:48favilaI also wish you could “turn off” a composite tuple--i.e. signal to the transaction processor that it should stop updating it#2021-07-0112:50favilacomposite tuples do something no other tuple can do: they know the effective value of the db at the moment right before committing the transaction datoms, so they can update composites to their correct value within that transaction even if the contents of the tx-data was uncoordinated{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 2")}
#2021-07-0112:51BoyanGot it! Yeah, I wish we could. Maybe that will come as a feature one day. It sounds like the type of schema change that could be allowed.#2021-07-0112:51favilayou can use a “normal” tuple and update it yourself, but you will have to be careful that you only prepare transaction data where you know what the final value will be when the tx-data arrives at the transactor, and that nothing else in the tx-data might alter that calculation.#2021-07-0112:52favilabut if storage cost is a concern, that’s what you gotta do#2021-07-0112:52favilaIt’s not impossible--datomic didn’t have tuples of any kind for years. we were manually maintaining composite indexes as serialized strings#2021-07-0112:54BoyanStorage isn't really that big of a concern in our case, I think. It was more like the bad aftertaste of having unused and unnecessary attributes getting asserted perpetually.#2021-07-0112:55faviladatomic doesn’t let you remove the cognitive burden of past mistakes. I think that’s the unspoken downside to the “no breaking changes” mantra{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 6")}
#2021-07-0112:56BoyanYeah, though there's a difference between old/deprecated attributes in the schema and having values for them asserted on entities.#2021-07-0113:01favilabecause of the history db and tx log, there’s always an assertion somewhere#2021-07-0112:58joshkhmy understanding is that the lambda produced when deploying an Ion is really just a proxy to code running on the compute or query group. does that mean that memory allocated to the lambda via the lambda configuration is less consequential than a typical lambda?#2021-07-0116:13Joe LaneYes!#2021-07-0120:38joshkhthanks Joe! does the code that is proxied to run in its own memory space? in other words, if my long running http-direct process has some state, say via mount, then there's no reason to expect that the proxied-to function can access that state, right?#2021-07-0120:39joshkh(we tested this for fun and ruled it out, but i thought i'd ask anyway)#2021-07-0201:09Joe LaneThere is no distinct “memory space” so your ion should be able to access any state that is correctly instantiated. Http-direct and lambda proxy’s both call ions. As long as the state is present on all cluster nodes you should be able to access it. however mount can add it’s own challenges due to piggy backing on ns loading. I can usually get by without a framework and instead just wrap my state in a delay and initializing it on the first access (what delays are for). #2021-07-0209:03joshkhvery interesting, thanks again#2021-07-0120:53luiseugenioHi. Is there a Datomic Connector (Source and Sink) for Kafka?#2021-07-0219:31refsetHi 🙂 in lieu of a more relevant response from someone else, you may be able to borrow and adapt some code from Crux https://github.com/juxt/crux/tree/master/crux-kafka-connect{:tag :div, :attrs {:class "message-reaction", :title "white_check_mark"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("✅")} " 2")}
#2021-07-0309:18luiseugenioHi, that’s the plan 🙂 (if “someone else” doesn’t show up) haha#2021-07-0414:04refsetAwesome 😄#2021-07-0204:20pedrorgirardiI’m sure I’m missing something, but does anyone know what might be causing this? (It’s fine locally, but it fails on an EC2 instance)#2021-07-0501:13pedrorgirardiSomeone already asked this, so sharing it here since it might be helpful to others: https://ask.datomic.com/index.php/546/could-not-find-artifact-com-datomic-ion-jar-0-9-48-in-central#2021-07-0502:14pedrorgirardihttps://clojure.org/reference/deps_and_cli#_maven_s3_repos#2021-07-0204:20pedrorgirardi#2021-07-0204:22pedrorgirardiI added this EC2 to the same VPC as Datomic Cloud - I’m not sure this is the way to go, I’m trying to figure things out.#2021-07-0204:23pedrorgirardiI added an SSH inbound rule, but that’s it, I didn’t mess with any other configuration.#2021-07-0204:25pedrorgirardi(Clojure CLI version 1.10.3.855)#2021-07-0215:55zendevil.ethI create entities like so and put some values in those entities. So far there are no errors:
(d/transact conn [{:db/ident :carecoach/name
                   :db/valueType :db.type/string
                   :db/cardinality :db.cardinality/one
                   :db/unique :db.unique/identity
                   :db/doc "name"}
                  {:db/ident :carecoach/number
                   :db/valueType :db.type/long
                   :db/cardinality :db.cardinality/many
                   :db/unique :db.unique/identity
                   :db/doc "number"}])
(d/transact conn [{:carecoach/name "Prikshet" :carecoach/number 20}])
(d/transact conn [{:carecoach/name "Deepak" :carecoach/number 10}])
(d/transact conn [{:carecoach/name "Prikshet" :carecoach/number 30}])
(d/transact conn [{:carecoach/name "Deepak" :carecoach/number 20}])
(d/transact conn [{:carecoach/name "Prikshet" :carecoach/number 40}])
(d/transact conn [{:carecoach/name "Deepak" :carecoach/number 50}])
But when I try to do a pull, I get the following error:
(defn get-sum [name-string]
  (d/pull db '[*] [:carecoach/name name-string])
  #_(d/q '[:find ?number
           :in $ ?name-string
           :where
           [?e :carecoach/name ?name-string]
           [?e :carecoach/number ?number]]
         db name-string))
(get-sum "Deepak")
:db.error/not-an-entity Unable to resolve entity: :carecoach/name
{:entity :carecoach/name, :db/error :db.error/not-an-entity}
error.clj: 57 datomic.error/arg
error.clj: 52 datomic.error/arg
db.clj: 589 datomic.db/require-id
db.clj: -1 datomic.db/require-id
db.clj: 689 datomic.db/require-attrid
db.clj: 686 datomic.db/require-attrid
db.clj: 534 datomic.db/resolve-lookup-ref
db.clj: 526 datomic.db/resolve-lookup-ref
db.clj: 568 datomic.db/extended-resolve-id
db.clj: 564 datomic.db/extended-resolve-id
db.clj: 579 datomic.db/resolve-id
db.clj: 572 datomic.db/resolve-id
How to fix this?#2021-07-0215:58faviladb is from before your transactions#2021-07-0215:59favilaagain: you should use the return value of transact, and you should pass db in to querying functions as an argument#2021-07-0215:59faviladb is not a “database handle” like in a normal relational db. You can’t def it once. It’s an immutable value.#2021-07-0215:59favilatransactions change that value and return a new db#2021-07-0216:00favilayou can even use d/with to produce a new db value without committing it to storage.#2021-07-0216:02favilahttps://docs.datomic.com/on-prem/best-practices.html#consistent-db-value-for-unit-of-work#2021-07-0216:26zendevil.ethwhat’s the fastest way to fix this? I’m in a hurry#2021-07-0217:18pyryAs favila said, just change get-sum to take db as an argument. Pass in the db-after you get from e.g. derefing the return value of the last d/transact#2021-07-0609:24zendevil.ethI have created a peer connection with my local datomic server like so:
(def db-uri "datomic:")
(def conn
"Get shared connection."
(d/connect db-uri))
I want to create this connection on datomic cloud aws without changing the code so that the deployment has the right connection string based on whether it’s in dev or prod. Is there a connection string that I can get for datomic cloud?
Ideally I want three uri’s: one for local development, one on cloud for staging and one for production.#2021-07-0610:22heliosIn this page (https://docs.datomic.com/on-prem/schema/identity.html#idents) there is this explicit quote:
> Idents should not be used as unique names or ids on ordinary domain entities. Such entity names should be implemented with a domain-specific attribute that is a unique identity.
Can anyone help me understand why? I was thinking of having a single top-level entity defined as {:db/ident :system/settings} , so that i can do (datomic/entity db :system/settings) but it seems discouraged. Why?#2021-07-0611:22favila1) different things are different 2a) idents are in a special cache/projection that is always fully realized in memory on all db objects so that attribute is resolution is really fast. Because of this you don’t want too many idents. 2b) this projection is a-historical: it doesn’t care about retractions only assertions. This is so renamed attrs can still be looked up under the old name. So this is yet another reason you shouldn’t use idents: they still resolve after retraction, which is probably not what you want for a domain entity{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 4")}
#2021-07-0611:23favilaBottom line: idents are designed for the special needs of attribute names and lookups, not for domain entities#2021-07-0613:14daveliepmannThanks, @U09R86PA4#2021-07-0610:27mkvlrI can think of two reasons: maybe because idents are entities and it would lead to the peer keeping the complete entity in memory? Or to not complect domain entities with them?#2021-07-0610:28mkvlrbut also curious to learn what the correct answer is 😼#2021-07-0611:19zendevil.ethI’m doing lein uberjar, and it’s just stuck on compiling. How do I fix this?#2021-07-0614:35indyThe usual case is that your app probably has some def that spawns something like a http client. A thread that is never terminated.#2021-07-0711:04souenzzoyou are doing a top-level (def conn (d/connect ...)) or (def client (d/client ...))
Probably you will need to use (def *conn (delay (d/connect ...)))#2021-07-0711:07souenzzo#client
Why this query returns nothing
(count (d/q
'[:find (pull ?e [*])
:where
[?e _ #uuid"ca92f40a-998b-5fef-96c0-7a3074c55ab1"]]
db))
=> 0
and this query find the entity?
(count (d/q
'[:find (pull ?e [*])
:where
[?a :db/ident]
[?e ?a #uuid"ca92f40a-998b-5fef-96c0-7a3074c55ab1"]]
db))
=> 1
This behavior feels wrong to me#2021-07-0714:06thumbnailThere's no index for EVAT, only EAVT, AEVT AVET and VAET#2021-07-0714:11souenzzoSo it should throw an exception, right?!
I remember getting "full database scan" in the peer api#2021-07-0714:28thumbnailYeah; I wondered why this doesn't throw a full database scan error either#2021-07-0714:31souenzzohttps://ask.datomic.com/index.php/644/datomic-returns-results-insufficient-bindings-exception
#2021-07-0714:48thumbnailThe client api will throw "full database scan" errors too btw; just not in this example.#2021-07-0714:53souenzzodatomic client is way harder to create minimal examples 😞#2021-07-0717:16Joe LaneWhy is it harder @souenzzo?#2021-07-0716:37souenzzowhere order matter:
In "devtime", I was trying to find "all attributes that point into an entity"
(d/q '[:find ?ident
:where
[?a :db/ident ?ident]
[?x ?a ?e]
[?e :user/id]]
db)
Execution error (IllegalArgumentException) at datomic.core.datalog/resolve-id (datalog.clj:330).
Cannot resolve key: Sun Jun 13 05:00:44 BRT 2021
(d/q '[:find ?ident
:where
[?a :db/ident ?ident]
[?e :user/id]
[?x ?a ?e]]
db)
=> [[:a] [:b] [:c]]
Is it a known issue? Should I report at http://ask.datomic.com?#2021-07-0717:52ghadi@souenzzo it's not clear what you are trying to do#2021-07-0717:54ghadiare you trying to figure out which attributes are refs? because ref attributes point to entities
[?attr :db/ident ?ident]
[?attr :db/valueType :db.valueType/ref]
or are you asking about which attribute entities have refs pointing to other entities?#2021-07-0717:54ghadi(because attributes are defined in datomic as ordinary entities, and they can have arbitrary facts asserted about those entities)#2021-07-0717:59souenzzoMy original problem was "or are you asking about which attribute entities have refs pointing to other entities?"
But I'm not reporting this. I already solved my problem.
I'm reporting ":where order may change the results of the query"
As far as I know, by design, changing the :where order should only affect performance, right?!#2021-07-0718:02ghadiit is not clear what happened in your first query. The error doesn't seem to correlate to the query#2021-07-0718:05souenzzoYes, related
Datomic finds all ?a
for each ?a, finds ?x and ?e
once ?e is used as an id at [?e :user/id], datomic tries to call resolve-id on ?e
But ?e is an instant/date#2021-07-0718:06ghadian entity can never be a date#2021-07-0718:07souenzzoExecution error (IllegalArgumentException) at datomic.core.datalog/resolve-id (datalog.clj:330).
Cannot resolve key: Sun Jun 13 05:00:44 BRT 2021
*e
=>
#error{:cause "Cannot resolve key: Sun Jun 13 05:00:44 BRT 2021",
:via [{:type clojure.lang.ExceptionInfo,
:message "processing clause: [?e :cs.model.monitored.machine-type/id], message: Cannot resolve key: Sun Jun 13 05:00:44 BRT 2021",
:data #:cognitect.anomalies{:category :cognitect.anomalies/incorrect,
:message "processing clause: [?e :cs.model.monitored.machine-type/id], message: Cannot resolve key: Sun Jun 13 05:00:44 BRT 2021"},
:at [datomic.core.datalog$throw_query_ex_BANG_ invokeStatic "datalog.clj" 50]}
{:type java.lang.IllegalArgumentException,
:message "Cannot resolve key: Sun Jun 13 05:00:44 BRT 2021",
:at [datomic.core.datalog$resolve_id invokeStatic "datalog.clj" 330]}],
:trace [[datomic.core.datalog$resolve_id invokeStatic "datalog.clj" 330]
[datomic.core.datalog$resolve_id invoke "datalog.clj" 327]
[datomic.core.datalog$fn__22911$bind__22923 invoke "datalog.clj" 442]
[datomic.core.datalog$fn__22911 invokeStatic "datalog.clj" 619]
[datomic.core.datalog$fn__22911 invoke "datalog.clj" 399]
[datomic.core.datalog$fn__22761$G__22735__22776 invoke "datalog.clj" 119]
[datomic.core.datalog$join_project_coll invokeStatic "datalog.clj" 184]
[datomic.core.datalog$join_project_coll invoke "datalog.clj" 182]
[datomic.core.datalog$fn__22834 invokeStatic "datalog.clj" 289]
[datomic.core.datalog$fn__22834 invoke "datalog.clj" 285]
[datomic.core.datalog$fn__22740$G__22733__22755 invoke "datalog.clj" 119]
[datomic.core.datalog$eval_clause$fn__23495 invoke "datalog.clj" 1460]
[datomic.core.datalog$eval_clause invokeStatic "datalog.clj" 1455]
[datomic.core.datalog$eval_clause invoke "datalog.clj" 1421]
[datomic.core.datalog$eval_rule$fn__23527 invoke "datalog.clj" 1541]
[datomic.core.datalog$eval_rule invokeStatic "datalog.clj" 1526]
[datomic.core.datalog$eval_rule invoke "datalog.clj" 1505]
[datomic.core.datalog$eval_query invokeStatic "datalog.clj" 1569]
[datomic.core.datalog$eval_query invoke "datalog.clj" 1552]
[datomic.core.datalog$qsqr invokeStatic "datalog.clj" 1658]
[datomic.core.datalog$qsqr invoke "datalog.clj" 1597]
[datomic.core.datalog$qsqr invokeStatic "datalog.clj" 1615]
[datomic.core.datalog$qsqr invoke "datalog.clj" 1597]
[datomic.core.query$q_STAR_ invokeStatic "query.clj" 619]
[datomic.core.query$q_STAR_ invoke "query.clj" 606]
[datomic.core.local_query$local_q invokeStatic "local_query.clj" 58]
[datomic.core.local_query$local_q invoke "local_query.clj" 52]
[datomic.core.local_db$fn__25618 invokeStatic "local_db.clj" 28]
[datomic.core.local_db$fn__25618 invoke "local_db.clj" 24]
[datomic.client.api.impl$fn__11642$G__11635__11649 invoke "impl.clj" 41]
[datomic.client.api.impl$call_q invokeStatic "impl.clj" 150]
[datomic.client.api.impl$call_q invoke "impl.clj" 147]
[datomic.client.api$q invokeStatic "api.clj" 393]
[datomic.client.api$q invoke "api.clj" 365]
[datomic.client.api$q invokeStatic "api.clj" 395]
[datomic.client.api$q doInvoke "api.clj" 365]
[clojure.lang.RestFn invoke "RestFn.java" 423]#2021-07-0718:07ghadican you make a reproducible case?#2021-07-0718:08ghadiincluding input form#2021-07-0718:27souenzzo(let [hello {:server-type :dev-local :system "hello" :db-name "hello"}
conn (-> (d/client hello)
(doto (d/create-database hello))
(d/connect hello))
tx-schema [{:db/ident :user/id
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :address/user
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}]
{:keys [db-after]} (d/transact conn {:tx-data tx-schema})
{:keys [db-after]} (d/transact conn {:tx-data [{:db/id "a"
:user/id "a"}
{:address/user "a"}]})]
(d/q '[:find ?ident
:where
[?a :db/ident ?ident]
[?x ?a ?e]
[?e :user/id]]
db-after))#2021-07-0718:28souenzzoSometimes throw by date, sometimes by string
Execution error (IllegalArgumentException) at datomic.core.datalog/resolve-id (datalog.clj:330).
Cannot resolve key: Retract all facts about an entity, including references from other entities and component attributes recursively.
#2021-07-0718:35souenzzoExpected result
=> [[:address/user]]
#2021-07-0718:54souenzzoIs it enough?#2021-07-0720:57jdkealyIs it possible to run datomic inside a docker-compose? Everything seems to work fine, except my clojure app cannot seem to access datomic:#2021-07-0721:01favilais port 4335 open also? The dev storage is special because it’s also acting as storage (h2 sql) on port 4335.#2021-07-0721:05jdkealyoh gotcha! trying that thanks#2021-07-0721:06jdkealyyes, actually 4334,4335,4336#2021-07-0721:20tvaughanIf the clojure app is also running in a container, localhost won't work. You'll need to use the name of the running container, as well as place both on the same bridge network. No need to export the ports in this case either, fyi#2021-07-0721:27favila4336 is the h2 web console. if you don’t need it you could keep that closed#2021-07-0722:10Drew Verleere posting my question here in case something jumps out at anyone 🙂 My next step is to ... re-read the docs again? Then flesh out the websocket functionality more, as i worry that my mock/toy example might be complicating things rather than simplifying them.
https://forum.datomic.com/t/where-do-i-view-output-of-datomic-ion-cast-alert/1892#2021-07-0723:25Joe LaneHey @drewverlee, a couple of things right off the bat:
• https://docs.datomic.com/cloud/operation/monitoring.html#searching-cloudwatch-logs-aws for viewing the cloudwatch logs for your solo system. The log group should be called datomic-{name-of-your-system}
• Per the https://docs.datomic.com/cloud/ions/ions-reference.html#entry-points table, Lambda ions MUST return either a String, InputStream, ByteBuffer, or File. Cast functions don't return any of those.
• I don't think you want to cast an alert, I think you want https://docs.datomic.com/cloud/ions/ions-monitoring.html#events. Also, custom metrics are not supported in solo (to keep the cost down).
• Your alert would not show up in the lambda logs, those have a completely separate log group than your system.
• You are on the right track!
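A minimal lambda-ion entry point consistent with the constraints Joe lists (names here are illustrative): it receives the `:input`/`:context` map and returns a String, one of the allowed return types, while casting a dev event rather than an alert:

```clojure
(require '[datomic.ion.cast :as cast])

(defn ping
  "Lambda ion entry point. Must return a String, InputStream,
  ByteBuffer, or File; here we return a String."
  [{:keys [input context]}]
  ;; events land in the system's CloudWatch log group, not the lambda's
  (cast/event {:msg "PingInvoked"})
  "pong")
```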
#2021-07-0723:38Drew Verlee> Your alert would not show up in the lambda logs, those have a completely separate log group than your system.
Ah, this is the issue.
#2021-07-0723:59Drew VerleeThanks a ton btw
#2021-07-0814:12babardo👋 With datomic cloud, is there any way to enforce unique constraint in https://docs.datomic.com/cloud/schema/schema-reference.html#db-iscomponent entities?
(d/transact (env/get-conn)
{:tx-data [{:operation/id "test-id"
:operation/parameters
[{:operation.parameter/paramete-one "one"}
{:operation.parameter/paramete-one "one"}]}]})
With schema being [:operation/parameters :ref :many :component] and [:operation.parameter/paramete-one :string :one :unique], I can easily create duplicate parameters.#2021-07-0815:34jarrodctaylorWhen you query :operation/id “test-id” after executing the transact. What value do you get back for :operation/parameters?#2021-07-0821:48babardo(d/pull (d/db (env/get-conn)) '[*] [:operation/id "fichtre"])
=>
{:db/id 101155069755987
:operation/id "test-id"
:operation/parameters
[{:db/id 101155069755988, :operation.parameter/paramete-one "one"}
{:db/id 101155069755989, :operation.parameter/paramete-one "one"}]}#2021-07-0814:46cjsauerHey all, I recently downgraded my prod system to a dev system after playing around with it for a bit. I set the solo stack’s reuse storage setting to true, and everything seemed to go smoothly, but I keep running into this error while trying to connect from my laptop via the socks proxy:
; Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
; Loading database
I see https://docs.datomic.com/cloud/troubleshooting.html#loading-db that database loading errors have something to do with “not yet loaded the requested database into memory” and I should consider it retryable, but no amount of retrying seems to be working. I’ve tried temporarily scaling up my DDB capacity in case that was it, to no avail (the db barely has any data in it anyway). Any ideas what would cause this?#2021-07-0814:48cjsauerfacepalm#2021-07-0814:49cjsauerDisregard this whole message. The KMS key I was using for the old system had been accidentally disabled. sigh#2021-07-0814:48joshkhon a scale from tomorrow to never, will there eventually be the possibility to excise data in cloud? or is that entirely off the roadmap?#2021-07-0908:10tatutI have no knowledge, but have been operating under the "never" assumption#2021-07-0918:03ghadi@U0GC1C09L is your use-case for excision GDPR compliance?#2021-07-1122:01steveb8nI’d like to know this answer also. Yes for gdpr#2021-07-1300:49joshkh@U050ECB92 yes, i'm asking in the context of GDPR. i know this conversation comes up every now and again, but i thought i'd check back because i haven't heard anything regarding cloud excision in a while. i'm in the process of making some relatively hefty architectural changes to move/store certain data outside of datomic (we considered https://medium.com/magnetcoop/gdpr-right-to-be-forgotten-vs-datomic-3d0413caf102, but i think a separate datastore is our best option). it's just a shame that we have to fight against some amazing technology, Datomic Cloud, because it doesn't meet our compliance needs#2021-07-1302:01jaretHi @U0GC1C09L, We completely understand the need you are facing and are reviewing options for implementing functionality that satisfies GDPR use cases. However as you know, we do not currently have a feature to support the complete removal of data in Datomic Cloud. We have seen customers use a separate data store and the throwaway keys solutions that you mentioned.
In response to your "scale of tomorrow or never" roadmap question, I think it's worth saying here that we do not publish a roadmap of Datomic feature development because we've made a commitment to stewarding the product according to technical best practices and we do not want the risk of customers developing against promised functionality that may turn out to be a bad idea for the product as a whole or be reprioritized in favor of more critical features. So as much as I'd love to give you a more satisfactory answer, I can't.
I will share this feedback with the team and reiterate that we understand the need here and feature requests like this are taken seriously as part of product management decision-making.#2021-07-1302:02jaretIf you think there is value in it, perhaps we could arrange a call to discuss what you are working on so I can give a full report to the team or perhaps make suggestions to your chosen solution. Just let me know, or open a case with support.#2021-07-1302:04jaretApologies for having to fight against Datomic, but GDPR has presented a difficult use case for a DB that is built on immutability.#2021-07-1307:55steveb8nI'm gonna end up doing the same as @U0GC1C09L for customer/PII data. will probably use Crux. The risk is that Crux tempts me to move everything off of Datomic. Stating the obvious but it's worth mentioning it when you discuss with the team.
1. Unhandled java.lang.IllegalArgumentException
No implementation of method: :-event of protocol:
#'datomic.ion.cast.impl/Cast found for class: nil
I assume it has something to do with the context not being set, i.e. that I'm supposed to be redirecting the output somewhere.#2021-07-0921:01Joe LaneRestart your repl, then make sure you run initialize-redirect.
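The sequence Joe describes, sketched for a fresh REPL: the redirect must be initialized before the first cast call in the process, since the caster is memoized:

```clojure
(require '[datomic.ion.cast :as cast])

;; Must run before any cast call in this JVM; if a cast already ran
;; and threw, restart the REPL first.
(cast/initialize-redirect :stdout)

(cast/event {:msg "HelloFromRepl"})
```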
#2021-07-0921:06Drew VerleeAh, ok. I called initialize-redirect, but i didn't restart my repl.#2021-07-0921:08Joe LaneYou need to initialize-redirect before you cast anything otherwise it throws that forever because the nil caster is memoized. We've got this in our backlog.#2021-07-1003:21stuartrexkingDoes Datomic Ions have lifecycle hooks? I want to manage a DB connection pool and external services with system start or halt events. I can’t see anything in the documentation. How do I do this?#2021-07-1008:05mdaveIs it possible to pass a pull pattern as an input to a query?
For example this works:
(def p '*)
(d/pull (d/db conn) [p] 1)
And I'm trying to achieve something similar here:
(d/q '[:find (pull ?e [?p])
:in $ ?e ?p
:where
[?e :db/ident _]]
(d/db conn) 1 '*)
But getting:
Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79). :db.error/invalid-attr-spec Attribute identifier ?p of class: class java.lang.String does not start with a colon
It's the same error when I try passing a keyword attribute or the wildcard as a string "*".#2021-07-1008:55oxalorg (Mitesh)Can you try this:
(d/q '[:find (pull ?e p)
:in $ ?e p
:where
[?e :db/ident _]]
(d/db conn) 1 '[*])
#2021-07-1008:57oxalorg (Mitesh)https://docs.datomic.com/on-prem/query/query.html#pattern-inputs#2021-07-1009:07mdaveThank you!
#2021-07-1009:10mdaveActually I tried this version too, but with ?p. I didn't realize that the question mark is not only a convention but is interpreted in a different way in queries.#2021-07-1009:19oxalorg (Mitesh)Yup this was confusing and I'm not 100% sure why that is.
I think this is because the ? symbol is for variables which the query engine must substitute when running the query. But a pull-pattern is fixed and it doesn't make sense for the query engine to go and replace it.
But that's just a guess; if someone can chime in and clear this up it would be super helpful!
#2021-07-1116:07onetomI'm getting these messages quite often, 3 at a time, when I'm starting my REPL:
Downloading: org/clojure/clojure/maven-metadata.xml from datomic-cloud
It leads to a quite slow REPL startup:
$ time clj -M:test:dev -e "(time (require 'datomic.ion))"
Downloading: org/clojure/clojure/maven-metadata.xml from datomic-cloud
Downloading: org/clojure/clojure/maven-metadata.xml from datomic-cloud
Downloading: org/clojure/clojure/maven-metadata.xml from datomic-cloud
"Elapsed time: 56.775926 msecs"
clj -M:test:dev -e "(time (require 'datomic.ion))" 52.34s user 2.88s system 335% cpu 16.463 total
I saw on stack overflow that I might want to adjust <updatePolicy>daily</updatePolicy> in my ~/.m2/settings.xml, but some commenters said it didn't work for them, so they just disabled snapshot version resolution.
Since this phenomenon does not occur on every startup, but (I suspect) only hourly, I'm not sure how to debug it.
Is there a recommended way to make these checks happen less often?#2021-07-1116:10onetomThe repo is defined as:
:mvn/repos
{"datomic-cloud" {:url ""}}
#2021-07-1121:50Alex Miller (Clojure team)what version of clj are you using?#2021-07-1121:52Alex Miller (Clojure team)clj won't use those updatePolicy settings so I don't think that will have any effect (should be daily for things like snapshots)#2021-07-1121:54Alex Miller (Clojure team)but what you're seeing here seems to be checking for clojure, a lib that should be found before that in other repos like maven central. it seems a little odd that you're even getting this at all (but maybe it's checking and rechecking because that metadata file is missing on the s3 maven repo)#2021-07-1121:55Alex Miller (Clojure team)if you have an old clj, it's possible some changes that have been put in would help. 1.10.3.855 is current stable release
#2021-07-1515:12onetomI've upgraded to 1.10.3.855, but that didn't help.
then i've updated the clojure version in my deps.edn from 1.10.1 to 1.10.3 and now it's not looking for org/clojure/clojure/maven-metadata.xml on datomic-cloud.
HOWEVER, i've just moved from martian/martian {:mvn/version "0.1.16"} to com.github.oliyh/martian {:mvn/version "0.1.17-SNAPSHOT"} and now im getting similar messages, roughly every 10-15 minutes, when I start a fresh Clojure CLI process, even just clojure -A:dev:test -Stree:
Downloading: com/github/oliyh/martian/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: com/github/oliyh/martian-clj-http/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: com/github/oliyh/martian-httpkit/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: martian/martian/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: martian/martian/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: com/github/oliyh/martian-clj-http/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: com/github/oliyh/martian-httpkit/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: martian/martian/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
Downloading: com/github/oliyh/martian/0.1.17-SNAPSHOT/maven-metadata.xml from datomic-cloud
What's weird is that the same artifact is reported multiple times.
is that some retry logic?#2021-07-1515:15Alex Miller (Clojure team)this may just be an artifact of the snapshot update logic. if it's checking every repo every time, it's never going to find that metadata file so it has to recheck again later
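One workaround consistent with the thread, sketched as a deps.edn fragment (version numbers illustrative): depend on release versions rather than -SNAPSHOT ones, so clj never needs to re-fetch maven-metadata.xml across repos:

```clojure
;; deps.edn (fragment) -- pin releases; avoid SNAPSHOT metadata checks
{:deps {org.clojure/clojure      {:mvn/version "1.10.3"}
        com.github.oliyh/martian {:mvn/version "0.1.17"}}}
```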
#2021-07-1515:17Alex Miller (Clojure team)maven has the ability to mark repos as whether they even have snapshots and that would help here (we don't expose that option)#2021-07-1516:45onetomI just read through https://maven.apache.org/settings.html#repositories
That <repository> <snapshots> <enabled>false or <updatePolicy>daily</updatePolicy> would be a nice improvement to support.
Now that a bare Clojure terminal REPL starts up in less than a second, getting rid of these other sources of "slow Clojure startup" would be a great quality of life improvement, especially for early stage projects, where REPL startups are more frequent.
I'm even looking into Emacs and inf-clojure again, because NOT using nREPL would shave another 1.7s from the startup time...#2021-07-1516:47onetomIt occurred to me that I should have looked into where this might have been reported, and indeed you have already said:
> We don't currently expose the update policy settings of repositories but that's probably reasonable to do.
here: https://clojure.atlassian.net/browse/TDEPS-97 , which is continued on this still open ticket, with patches, to support the maven repository options:
https://clojure.atlassian.net/browse/TDEPS-101#2021-07-1516:58Alex Miller (Clojure team)I don't know that I want what's in those patches exactly, which is why this hasn't been moved forward#2021-07-1516:59Alex Miller (Clojure team)but it's helpful to me to know your experiences above#2021-07-1121:12Drew VerleeI logged the the request sent to my websocket handler and it included a hashmap with the key "input" and "Context". "Input" in the log contains a json payload "{\"headers...". Which ideally would be serialized to edn, but regardless, i would expect calling (get request "Input") to return that string, instead, in the logs, im seeing it return null.
I'm basically trying to do whats outlined here
https://www.freecodecamp.org/news/real-time-applications-using-websockets-with-aws-api-gateway-and-lambda-a5bb493e9452/
And so i believe i should be able to translate :
event.requestContext.connectionId;
into (get-in event ["requestContext" "connectionId"])
Here is the handler:
(defn handler [request]
  (cast/event {:msg "WebSocketRequest"
               ::request request
               ::input (get-in request ["Input"])
               ::keys (keys request)
               ::type (type request)})
  {:status 200 :body "foobar"})
And in the output we see the key "Input" but when i try to get the value, and log that, it comes back null.
{
"TomattoBackendDatomicIonWebsocketKeys": [
"Context",
"Input"
],
"TomattoBackendDatomicIonWebsocketRequest": {
"Context": <removed>
"Input": "{\"headers\": ...}"
},
"TomattoBackendDatomicIonWebsocketInput": null,
}
#2021-07-1200:29Joe Lane@U0DJ4T5U1 check https://docs.datomic.com/cloud/ions/ions-reference.html#entry-points
Input and context are keywords of the request map. The value of :input is a json string, so you’ll need to parse that. #2021-07-1201:06Drew Verlee@U0CJ19XAM I agree, the logs are saying the value of Input is a json string. But that string should still be returned as the value of (get-in request ["Input"]) right?#2021-07-1201:07Joe LaneInput is a keyword. See the docs I linked to.
:input#2021-07-1201:08Joe Lane(defn echo
[{:keys [context input]}]
input)#2021-07-1201:11Drew Verleeah, ok. I'll check the docs but i take your meaning.#2021-07-1201:45Joe LaneThe casted log messages can be a little misleading because they are required to be json. A consequence of this is that all keywords are converted to strings. #2021-07-1202:07Drew VerleeIt's reasonable, I'm not sure why I didn't think of that.#2021-07-1217:43zendevil.ethI’m trying to run Datomic https://github.com/alexanderkiel/datomic-free and my clojure server as two services using docker compose locally, and this is my compose file:
version: '3'
services:
  datomic:
    image: akiel/datomic-free
    environment:
      - ADMIN_PASSWORD="admin"
      - DATOMIC_PASSWORD="datomic"
  web:
    build: .
    ports:
      - "3001:3000"
This was working without docker where I was running the transactor on the machine directly using the local dev instructions on the datomic website, but using docker compose, I get the following error when making a request to my server that involves a database transaction:
https://gist.github.com/zendevil/ce069eb7375ede709c2e4ebbd3c2ef3b
Relevant code:
(def db-uri "datomic:")

(def *conn
  "Get shared connection."
  (delay (d/connect db-uri)))

(defn install-schema
  "This function is expected to be called at system start-up.
  Datomic schema migrations or db preinstalled data can be put into 'migrations/schema.edn'.
  Each tx will be executed exactly once no matter how many times the system restarts."
  []
  (prn "installing schema")
  (let [schema-map (read-string (slurp "resources/migrations/schema.edn"))]
    (prn "schema map" schema-map)
    @(d/transact (force *conn) (:creator-schema schema-map))
    @(d/transact (force *conn) (:creation-schema schema-map))))

(defn init-db [_]
  (prn "database created " (d/create-database db-uri))
  (try (install-schema)
       (catch Exception e (prn "installing schema" e)))
  (r/response "ok"))
The http request calls init-db#2021-07-1218:19Drew Verleethe error is saying the jdbc client can't reach mysql. So somehow that address is wrong.
(def db-uri "datomic:")
instead#2021-07-1219:08zendevil.eththat gives:
Caused by: org.h2.jdbc.JdbcSQLException: Wrong user name or password [28000-171]#2021-07-1219:24Drew Verleewell it connected, so the next step is to give it the right username and password.#2021-07-1219:40zendevil.ethdo you see anywhere here to give a username or password?
https://github.com/alexanderkiel/datomic-free#2021-07-1221:00Drew Verleeit's currently passed in the db-url, or at least the password is#2021-07-1221:03Drew Verleewhat you're passing looks correct in that it matches the grammar of the string on the GitHub page#2021-07-1221:03Drew Verleesecurity is a PITA because it's not specific about what's wrong, e.g. password or user (of course it can't know, but it also doesn't want to share)#2021-07-1221:04Drew Verlee(def db-uri "datomic:")
maybe?#2021-07-1222:40jdkealyIf I'm using dynamodb in prod, can I also use dynamodb locally? Should I be? Is it seamless to restore a dynamo db into local if the storage engines are different?#2021-07-1223:05jdkealyAh I see the dynamodb local storage. I didn't know that was an option. Is that recommended?#2021-07-1316:56jaretUsing Datomic on-prem backup and restore (https://docs.datomic.com/on-prem/operation/backup.html) to move across storages is a great feature. However for https://docs.datomic.com/on-prem/operation/backup.html#other-storages we recommend running with all system processes down, running the restore, then starting the transactor and peers.#2021-07-1317:55jaretDatomic Cloud 884-9095 now available! https://blog.datomic.com/2021/07/Datomic-Cloud-884-9095-New-tiers-and-internet-access.html#2021-07-1318:03kennyAwesome!! Still reading through everything. I have one suggestion on the https://docs.datomic.com/cloud/whatis/configurations-and-pricing.html. It uses reserved instances. AWS has pretty much deprecated reserved instances, replacing them with savings plans. I’d suggest using savings plan rates instead.
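On the DynamoDB Local question above: Datomic on-prem has a dedicated ddb-local storage protocol for exactly this. A connection sketch, where the host, port, table name, database name, and dummy credentials are all placeholders (see the on-prem storage docs for the exact URI shape):

```clojure
(require '[datomic.api :as d])

;; DynamoDB Local running on localhost:8000; DynamoDB Local accepts
;; dummy credentials, so the key/secret values here are arbitrary.
(def uri
  "datomic:ddb-local://localhost:8000/my-table/my-db?aws_access_key_id=dummy&aws_secret_key=dummy")

(d/create-database uri)
(def conn (d/connect uri))
```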
#2021-07-1318:23danierouxAPI Gateway automation for ions and clients
Yes! I'll be deleting hand rolled Terraform config with glee soon
Run and scale analytics anywhere
Yes please!
#2021-07-1320:20kennyWas m5.large removed from the instance types?#2021-07-1320:39Joe Lane@U083D6HK9 Yes. Yes it was 🙂. You should look to t3.xlarge or t3.2xlarge instances. Note that the t3.2xlarge instances have more vCPUs than i3.xlarge#2021-07-1320:39kennyHow come? The m5 family has very different characteristics from the t3 family.#2021-07-1321:40emAbsolutely fantastic upgrades, makes it 10x easier to recommend Datomic, especially for people looking for loads in between the previous large gap of Solo and Production
#2021-07-1323:45Drew Verlee@UNRDXKBNY can you elaborate?#2021-07-1323:57kennyI'm really liking how the "solo" topology constraints do not exist anymore! It makes testing production like things so much easier.#2021-07-1400:10Drew Verlee@U083D6HK9 I'll read the post.#2021-07-1400:11Drew Verlee> Datomic no longer has a Solo compute stack. If you were using Solo you can upgrade to Production at no additional cost by performing the following steps:#2021-07-1407:35em@U0DJ4T5U1 Mostly that previously the minimum production cost ran around $400 a month, and that the solo topology didn't have a load balancer and nice things like HTTP direct that came with it. Now we have the best of both worlds - HA setups etc. at much more reasonable prices in the $50-$100 a month range for smaller businesses that didn't need 2 i3.large instances.#2021-07-1413:49Drew Verleeah ok. Hopefully a bit less than $50 if it's the same price as solo. Which is around 35.#2021-07-1413:55Joe Lane@U0DJ4T5U1 https://docs.datomic.com/cloud/operation/growing-your-system.html#instance-sizes#2021-07-1318:09zalkyHey all, running into an issue where two datoms whose components (e a v t op) are all the same, and should be redundant, are said to be in conflict. The issue seems to be that the value (v) is a serialized byte array. I'm using https://github.com/ptaoussanis/nippy to serialize to a byte array. Are my expectations off that Datomic can tell that two serialized values are the same, or is there some other underlying issue here?#2021-07-1318:26zalkyI think I found a relevant section of the Datomic docs:
https://docs.datomic.com/on-prem/schema/schema.html#bytes-limitations
It says that attribute values of type byte cannot have value semantics, an implication of which is that you also cannot tell equivalent datoms apart.#2021-07-1320:43ghadiif you're using bytes to store large blobs, this is generally a bad idea#2021-07-1320:44ghadiit all depends on what your app-level semantics are#2021-07-1403:26zalkyThanks @U050ECB92 for the response. The use case is for small blobs, in a very limited context. Normally we would not require value semantics but we ran into this edge case with redundant data. We were able to work around the problem in the application layer with some additional constraints.#2021-07-1320:33kennyIn 884-9095 the socks proxy was replaced with api gateway:
> Replaced: The socks proxy is no longer available; clients can connect directly to the client API Gateway.
It's probably worth noting in the release notes that if you run queries that take longer than 30s, these will now time out. API Gateway has a 30-second integration timeout (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#http-api-quotas), which cannot be increased.#2021-07-1413:54stuarthallowayOut of curiosity, do you regularly run such long queries in development?#2021-07-1415:19kennyWe use dev-local in development, so slightly different situation. We do have 2 large queries that can take 30s+ under a high load situation.
> All instances sizes now cost less to run... If you are running production, your cost http://docs.datomic.com/cloud/changes.html#884-prod...
I followed that link and it just takes me to "884-9095 for Production Users". It's not clear how all instance sizes cost less now. How did the cost decrease for an instance type that was previously available (e.g., t3.large for query group)? I have seen https://docs.datomic.com/cloud/operation/growing-your-system.html#hourly-price, but that just seems to list the regular On-Demand cost for the given instance types, which will not have changed from the last release.#2021-07-1320:47kennyPerhaps the Datomic license cost decreased? If so, where can I find a table for that? IIRC, previously it was the same price per hour as the instance type you chose.#2021-07-1320:50kennyFrom the https://docs.datomic.com/cloud/changes.html#884-9095, what exactly causes the "up to one minute of downtime"?
> This upgrade will cause up to one minute of downtime for each compute group. Make sure to perform this upgrade at a time that minimizes the impact of these disruptions.#2021-07-1413:53stuarthallowaySwitching from an NLB to an ALB.
#2021-07-1415:20kennyCurious why it wouldn't be a no downtime switchover?#2021-07-1320:59kenny> If you manually created an API Gateway for your ion application on a previous release of Datomic, that gateway will no longer work.
We are definitely in this scenario. It will no longer work because the update requires a new LB?#2021-07-1321:02kenny> Enhancement: The storage template now sets DDB provisioning to fit within the AWS free tier if your usage is low enough.
> https://docs.datomic.com/cloud/changes.html#884-storage
We have manually modified our DDB capacity mode to On-demand since it is a much better fit for our workloads. Will this storage update impact that setting? Will we need to go back and manually set it again?#2021-07-1413:53stuarthallowayIf you are running the production template, you do not need to do a storage upgrade at all.#2021-07-1415:21kennyOk. If I did do the storage update, would it impact that setting?#2021-07-1419:41kennyI ask because at some point in the future, we’ll need to do a storage update. At that point, it’s unlikely we’ll remember that that update could impact the capacity mode. #2021-07-1322:07kennySince the client API Gateway is exposed to the internet, how does access control work for it?#2021-07-1413:06Robert A. Randolphhttps://docs.datomic.com/cloud/operation/access-control.html#how-datomic-access-control-works#2021-07-1415:22kennyAh, it's using Datomic's auth mechanism. I do wonder if this opens the door to DOS attacks. If someone had direct access to your client API endpoint, could they bring down your system by rapidly sending requests?#2021-07-1322:15kennyI see the EndpointAddress format has changed in a backwards incompatible way. .<compute group>.<region>. to .<compute group>.<region>. . This will require all client applications to also update their endpoints. I don't see this noted in the changelog, but it seems like a critical piece to know.
EDIT:
Actually, this doesn't seem entirely true. I see a Route 53 entry for an upgraded "Solo" system where the old entry. record is pointing directly to the IP address of the 1 node in the system. Not sure what that means for production topologies.#2021-07-1515:08kennyfyi, in a prod situation, it seems like it manually added N IP addresses to the entry. record. Unclear if those entries are continuously updated.#2021-07-1322:49kennyIs the recommended :endpoint for a client application inside the Datomic VPC the value for the EndpointAddress CF stack output?#2021-07-1322:51kennyhttps://docs.datomic.com/cloud/operation/growing-your-system.html#query-group links to https://docs.datomic.com/cloud/operation/query-groups.html, which is 404'ing.
#2021-07-1400:19Drew VerleeIs this advice on how to setup a websocket the recommended path? https://forum.datomic.com/t/websockets-from-ions/1255/4
Should we rely on :datomic.ion.edn.api-gateway/data to have the requestContext and connectionId? If not, someone should drop a note there.#2021-07-1421:35jaret@U0DJ4T5U1 we are working on some advice given the changes in the new release. I’ll be sure to cross post when it’s ready.#2021-07-1521:56Drew Verlee@U1QJACBUM that sounds great, I'll be looking for it :0#2021-07-1410:52robert-stuttaforddoes anyone have code handy to roundtrip transactor functions by reading them out of the database and then re-transacting them?
need a way to take all existing db/fns and stick them in an in memory database for tests to use.#2021-07-1417:35kennyCurious why the new Cloud https://docs.datomic.com/cloud/operation/compute-template.html#parameters uses yes/no for a boolean instead of true/false like the https://docs.datomic.com/cloud/operation/storage-template.html#parameters.#2021-07-1421:36jaretYou mean like yes/no for client API?#2021-07-1422:01souenzzoyes @U1QJACBUM
ClientAPI Client API yes/no
Ions Ions yes/no
#2021-07-1423:38kennyYeah. Just seems odd that a new naming scheme was introduced. Was wondering what the thinking was.#2021-07-1514:09jaretSo I asked about this on standup and in summary "we like yes/no better, but have a principal of being backwards compatible." The joke was made, that the real question is why we didn't change reuse storage from true/false to yes/no. The original yes/no decision started when considering we were asking the user what they want for bastion/client-api/other considerations. We'll use yes/no going forward without changing that storage prompt.#2021-07-1514:17kennyHaha, thanks for answering 🙂#2021-07-1515:31souenzzo@U1QJACBUM in yaml if you write ClientApi: no it will parse as :ClientApi false
https://yaml.org/type/bool.html
I personally do not use YAML and hope to never use it. But it's very common in infrastructure/AWS to use yaml on things.
Maybe it's a good idea to switch from yes/no to "yes"/"no" in the docs#2021-07-1421:22FabimThe [pricing table](https://docs.datomic.com/cloud/operation/growing-your-system.html#monthly-price) lists monthly prices. Does this mean that Ions now has a monthly cost even if I scaled down all (my solo) instances?#2021-07-1421:33jaretYou are still billed hourly. These monthly prices assume you are running for the entire month; they are not a special new unit for pricing.
#2021-07-1423:39kennyDoes the primary compute group need to run on i3.large instances?#2021-07-1513:45jaretNo, but you only get valcache in the i3 family. https://docs.datomic.com/cloud/operation/growing-your-system.html#valcache
#2021-07-1514:20kennyWe successfully moved to 884-9095! It took about a full day of work. Downtime was longer than 1 minute, less than 1 hour, due to removing the old, self-managed api gateway in place of the Datomic managed one and redeploying the app (realized this could have been avoided after the fact). Seriously awesome update Datomic team. Thanks for making our lives easier.#2021-07-1514:44Tyler Nisonoff🥳 any advice to others about to embark in this migration?
I’m currently trying to figure out how to minimize downtime… given we currently use a socks proxy, I’m guessing I’ll have to upgrade datomic, then change the app to use the new endpoint, redeploy the app, and incur downtime during that period?#2021-07-1514:55kennyI think it is very domain specific. Our API gateway downtime was due to needing to sync multiple service updates to point to the new API Gateway endpoint. That could have been minimized by using dns & switching over with the push of a button.
Curious, you're using the socks proxy in prod?#2021-07-1515:01Tyler Nisonoffi am right now (although its pre-launch 😛 )
as you say that, I realized I could avoid the socks proxy in prod by using VPC peering, and just have avoided doing that, so maybe i’ll just upgrade and fix the connection in one go#2021-07-1515:01Tyler Nisonoff(the datomic client is running in a separate VPC atm, so the socks proxy was a quick way to get access)#2021-07-1515:05kennyOh I see. IMO, that's pretty sketchy 🙂 You have a single point of failure -- the bastion host. I don't think the socks proxy was intended to be used in a prod HA environment.#2021-07-1515:06Tyler Nisonoffyeah definitely sketch 🙂
Talking this through, I think if I fix that first, the upgrade will be more straightforward — so I’ll start by getting proper Inter-VPC access working first
Thanks 🙂
#2021-07-1515:11stuarthalloway@U016TR1B18E you will not need the VPC peering at all post-upgrade, are you making unnecessary work?#2021-07-1515:12Tyler Nisonoffpossibly! re-reading the new release instructions and was seeing that its quite different now, so I think the right thing may be:
1. upgrade
2. fix inter-vpc connection
and the little bit of downtime isn’t problematic for me#2021-07-1515:13Tyler Nisonoffseems like I’d want to use the new VPC Endpoint#2021-07-1515:14kennyI launched a 884-9095 query group stack with the "Default" metrics selection accidentally selected. Generally, I don't trust defaults since they always seem to change without notice, so I prefer to be explicit about what I select. In my situation, I cannot change from Default to Detailed because the CF stack detects no changes (in my case the Default is Detailed). I could change from Default -> Basic -> Detailed as a workaround.#2021-07-1521:14redingerYeah, this is because the CloudFormation evaluates the value of the metrics choice before doing an update. In your case, there was no change.
Making any kind of change that results in a changeset would have caused the stack to update, which you discovered by change to Basic and back. Changing the metrics choice this way also caused your instances to restart, because the metrics setting is passed to the instances.
To avoid restarting instances, an alternative solution is to change something like the MaxSize. Temporarily bump it higher, and do the metrics change, then you can lower the MaxSize again. Changing the MaxSize wouldn’t cause instances to cycle.
Thanks for the feedback, I’ll give some thought to how we might improve this scenario.
#2021-07-1521:57kennyThanks for that info @U0508TN2C.#2021-07-1604:59robert-stuttaford@jaret is it just me or is there some documentation missing here?
https://docs.datomic.com/on-prem/schema/schema-change.html#altering-avet-index
i came to read this to confirm something: to add uniqueness to an attr that has existing non-unique values, i can retract the non-unique values and then add uniqueness? i don't have to excise them?#2021-07-2014:28robert-stuttafordjust following up on this one, @jaret? 🙂#2021-07-2015:36jaretYes @U0509NKGK you just need to retract. If there are values present for that attribute, they must be unique in the set of current assertions.#2021-07-2015:37jaretYou do not need to and should not excise them.#2021-07-2106:01robert-stuttaford@jaret thanks pal! still think your documentation is missing some key paragraphs there, though 🙂#2021-07-2106:01robert-stuttaford"... so the two will be described together."
<page ends>#2021-07-1609:27Ben HammondI am trying to understand datomic ions :http-direct
I am running 884-9095
https://docs.datomic.com/cloud/ions/ions-tutorial.html talks about using a url like
curl https://$(IonApiGatewayEndpoint)/datomic -d :hat
and I don't understand the /datomic part of the route; how does this fit in?
Should the http entry point function contain some dispatch logic to process the incoming URL?
is this somehow handled implicitly within ApiGateway?#2021-07-1609:28Ben HammondI get this error message on my experiment
{:cause "No op uniquely specified in x-nano-op header or in URI path"}#2021-07-1609:29Ben Hammondwhich presumably is telling me that I dont have my dispatch logic set up#2021-07-1609:31Ben HammondIt doesn't seem obvious how I am supposed to set it up though#2021-07-1610:01Ben Hammondwhen I post to the other api gateway, the one with ions in its name I get
HTTP/1.1 500 Internal Server Error
Date: Fri, 16 Jul 2021 10:01:04 GMT
Content-Type: text/plain
Content-Length: 20
Connection: keep-alive
server: Jetty(9.4.36.v20210114)
apigw-requestid: CjrTIierrPEEPdQ=
Ion execution failed
Response code: 500 (Internal Server Error); Time: 8282ms; Content length: 20 bytes
#2021-07-1610:06Ben Hammondit's supposed to return a hardcoded
{:status 200
:headers {"Content-Type" "application/edn"}
:body {:it :worked}}#2021-07-1611:05Ben Hammondhttps://docs.datomic.com/cloud/ions/ions-tutorial.html#configure-entry-points
> the `:http-direct` section specifies functions that will be callable via HTTP
but it looks like you can only set up a single http-direct function per ion-config file; is that correct??#2021-07-1611:17Ben Hammondso I followed
https://docs.datomic.com/cloud/troubleshooting.html#http-500
and I see the message
{
 "Msg": "IonHttpDirectException",
 "Ex": {
  "Via": [
   {
    "Type": "java.lang.IllegalArgumentException",
    "Message": "No implementation of method: :->bbuf of protocol: #'datomic.java.io.protocols/ToBbuf found for class: clojure.lang.PersistentArrayMap",
    "At": ["clojure.core$_cache_protocol_fn", "invokeStatic", "core_deftype.clj", 583]
   }
  ],
  "Trace": [
   ["clojure.core$_cache_protocol_fn", "invokeStatic", "core_deftype.clj", 583],
   ["clojure.core$_cache_protocol_fn", "invoke", "core_deftype.clj", 575],
   ["datomic.java.io.protocols$fn__3594$G__3589__3599", "invoke", "protocols.clj", 12],
   ["$__GT_bbuf", "invokeStatic", "io.clj", 20],
   ["$__GT_bbuf", "invoke", "io.clj", 17],
   ["clojure.core$update", "invokeStatic", "core.clj", 6185],
   ["clojure.core$update", "invoke", "core.clj", 6177],
   ["datomic.ion.http_direct$encode_body", "invokeStatic", "http_direct.clj", 66],
   ["datomic.ion.http_direct$encode_body", "invoke", "http_direct.clj", 61],
so,
is it not sufficient to return a plain old persistent map for an http-direct call?#2021-07-1611:27Ben HammondI am trying to return
{:status 200
:headers {"Content-Type" "application/edn"}
:body {:it :worked}}
maybe it's that :body value that is the problem
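The lesson Ben lands on below is that the :body has to be one of the supported types, so a plain map must be encoded first. A sketch of the corrected handler, pre-encoding the edn body as a String with pr-str:

```clojure
;; http-direct response with the edn body encoded as a String;
;; returning a plain map as :body trips the ToBbuf protocol error above.
(defn handler [_request]
  {:status  200
   :headers {"Content-Type" "application/edn"}
   :body    (pr-str {:it :worked})})
```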
#2021-07-1611:35Ben Hammonder, so I think the lesson I learned is that
> Lambda ions https://docs.datomic.com/cloud/ions/ions-reference.html#lambda-ion must return a String, InputStream, ByteBuffer, or File.
also applies to the :body of an http-direct response#2021-07-1611:35Ben Hammonddo let me know if that is not correct#2021-07-1611:41souenzzoIt's more like a #pedestal question maybe
You can see here the supported bodies
https://github.com/pedestal/pedestal.ions/blob/master/src/io/pedestal/ions.clj#L21
#2021-07-1611:45Ben Hammondoh is it pedestal running behind the scenes?
that is helpful to know#2021-07-1611:47souenzzoHumm... not sure now.
When I started with datomic ions, pedestal was the way to go. I will re-read the docs.#2021-07-1611:48Ben Hammondbut
(supers (class {:hello :world}))
=>
#{clojure.lang.AFn
java.lang.Runnable
java.lang.Iterable
clojure.lang.IHashEq
clojure.lang.IMapIterable
java.util.Map
clojure.lang.Seqable
java.io.Serializable
clojure.lang.IObj
clojure.lang.IEditableCollection
clojure.lang.ILookup
clojure.lang.Associative
clojure.lang.APersistentMap
clojure.lang.IKVReduce
clojure.lang.IPersistentCollection
clojure.lang.IFn
java.lang.Object
clojure.lang.MapEquivalence
java.util.concurrent.Callable
clojure.lang.IMeta
clojure.lang.IPersistentMap
clojure.lang.Counted}
and
(extend-protocol IonizeBody
...
clojure.lang.IPersistentCollection
(default-content-type [_] "application/edn")
(ionize [o] (pr-str o))
...
which would have been fine#2021-07-1611:49Ben Hammondbut the protocol mentioned in the error log was
.protocols/ToBbuf#2021-07-1611:50souenzzo@U793EL04V sorry for misleading
Instead of wiring a web handler to a single function, use a routing library such as Compojure or pedestal.
^ At the end of this tutorial there is this phrase
You are using the "raw ions handler", which is not related to the code that I sent you#2021-07-1611:53Ben HammondI think I am using the HTTP Direct Ions Handler
https://docs.datomic.com/cloud/ions/ions-tutorial.html#http-direct#2021-07-1611:55Ben HammondI would like to know how you are supposed to support multiple routes from the http-direct ions handler; do I arrange a routing library into that http-direct handler-fn, or is something already built in?#2021-07-1612:07souenzzoions http handler is a single ring function.
You should use compojure, pedestal, or any other ring routing library#2021-07-1612:11souenzzoUsing pedestal for example, the same service-map that you will use on ions ring handler, you can use to develop at localhost using jetty
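A sketch of souenzzo's suggestion above: since http-direct exposes a single ring handler, do the routing inside it with a library such as Compojure (assumed here as a project dependency; the routes are illustrative, not from the thread):

```clojure
(require '[compojure.core :refer [defroutes GET POST]])

;; One ring handler, multiple routes; ion-config.edn then points
;; :http-direct at a var wrapping `app`.
(defroutes app
  (GET  "/health"  [] {:status 200 :body "ok"})
  (POST "/datomic" [] {:status 200
                       :headers {"Content-Type" "application/edn"}
                       :body (pr-str {:it :worked})}))
```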
#2021-07-1613:27Ben HammondThat's helpful
thanks Enzzo#2021-07-1610:06Björn EbbinghausWhy does count for something empty, like in (d/q '[:find (count ?e) . :where [?e :db/ident :something]] db) return nil and not 0?#2021-07-1610:11favilaAggregation functions (count in this case) are not called when there are no results. The result set is #{} (set of no tuples). No tuples means no aggregation functions called. The . calls ffirst (i.e. first slot from first tuple), which is nil.#2021-07-1611:04Björn EbbinghausThank you for the technical answer. 🙂
But I really think count should return 0 for an empty set, should it not? It feels so... wrong…#2021-07-1611:21robert-stuttafordthey ain't gonna change it now 🙂#2021-07-1612:52stuarthallowayso different aggregates should work differently for empty sets?#2021-07-1612:54stuarthalloway@U09R86PA4’s point is not merely technical, it is consistent across all aggregates.#2021-07-1613:36favila@U4VT24ZM3 The semantics of what you want are unclear in the general case. Ignore the . syntax for now, because it’s just syntax sugar for an ffirst on the query result. The result set after :where and :with is a tuples (one per slot in the :find), unaggregated. Aggregation takes the unaggregated :find slots and produces another set with the aggregated columns unbound, and then the aggregation functions are called with what the correlated, unbound values would be, and the result of the aggregation is placed into it. Some code to illustrate:
;; :find before aggregation
(sort (d/q '[:find ?l ?n
:where [?l ?n]]
(into
(mapv vector (repeat "a") (range 5))
(mapv vector (repeat "b") (range 5)))))
=> (["a" 0] ["a" 1] ["a" 2] ["a" 3] ["a" 4] ["b" 0] ["b" 1] ["b" 2] ["b" 3] ["b" 4])
;; :find collecting sets for aggregation
(sort (d/q '[:find ?l (vec ?n)
:where [?l ?n]]
(into
(mapv vector (repeat "a") (range 5))
(mapv vector (repeat "b") (range 5)))))
=> (["a" [0 1 2 3 4]] ["b" [1 2 3 4 0]])
;; aggregation value substituted in place
(sort (d/q '[:find ?l (count ?n)
:where [?l ?n]]
(into
(mapv vector (repeat "a") (range 5))
(mapv vector (repeat "b") (range 5)))))
=> (["a" 5] ["b" 5])#2021-07-1613:37favilaso aggregation always has the shape of reducing one-or-many values to one value to place into an existing tuple. It can’t make new tuples in the result set, only reduce#2021-07-1613:37favilaso if (count ?n) returned 0, into what tuple would it be placed?#2021-07-1613:39favilaHere’s the same code example as above, but with an empty result set:
;; :find before aggregation
(sort (d/q '[:find ?l ?n
:where [?l ?n]
;; not in result set
[(ground "Z") ?l]]
(into
(mapv vector (repeat "a") (range 5))
(mapv vector (repeat "b") (range 5)))))
=> ()
;; :find collecting sets for aggregation
(sort (d/q '[:find ?l (vec ?n)
:where [?l ?n]
;; not in result set
[(ground "Z") ?l]]
(into
(mapv vector (repeat "a") (range 5))
(mapv vector (repeat "b") (range 5)))))
=> ()
;; aggregation value substituted in place
(sort (d/q '[:find ?l (count ?n)
:where [?l ?n]
;; not in result set
[(ground "Z") ?l]]
(into
(mapv vector (repeat "a") (range 5))
(mapv vector (repeat "b") (range 5)))))
=> ()#2021-07-1613:40favilaHow could count produce 0 in this case?#2021-07-1613:40favilaThe only way to get what you want is to make aggregations “special” somehow when there are no unaggregated items in the :find#2021-07-1613:58kennytiltonBackground: a year or so ago I was messing with Datomic and installed some of its software.
Now I am trying to run examples from the Helix react native world. No connection, right? But:
$ cd helix-react-native
~/helix-react-native (master) $ make shadow
clj -m shadow.cljs.devtools.cli watch dev
Error building classpath. Could not find artifact com.datomic:dev-local:jar:0.9.203 in central (https://repo1.maven.org/maven2/)
make: *** [shadow] Error 1
Strange, but I am no maven/jar/npm expert. I will be happy to uninstall anything Datomic related for now, but just not sure how to track down who has that dependency. Any tips welcome. Thx! 🙏#2021-07-1613:59Alex Miller (Clojure team)probably something in ~/.clojure/deps.edn#2021-07-1613:59Alex Miller (Clojure team)or ~/.m2/settings.xml
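For reference, the kind of leftover entry in ~/.clojure/deps.edn that would trip every clj invocation this way (the coordinate is taken from the error message above; the surrounding shape is illustrative):

```clojure
;; ~/.clojure/deps.edn — a user-level dependency like this is merged
;; into the classpath computation of every project:
{:deps {com.datomic/dev-local {:mvn/version "0.9.203"}}}
```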
#2021-07-1614:31kennytiltonBingo. Thx, @alexmiller! I never mess with ~/.clojure/deps.edn, must have been in the Datomic install as a suggestion.#2021-07-1614:42Tatiana KondratevichHey! Perhaps someone has already tried IonApiGateway? Can't set up authentication correctly for this connection. I assumed that this was related to the IAM, but I cannot understand how to correctly configure it then so as not to receive 502. And in general, is it correct to use it to connect?#2021-07-1614:48Joe LaneHow are you setting up authentication?#2021-07-1614:55Tatiana Kondratevich@U0CJ19XAM Earlier I used the access gateway with IAM. I don't understand if I can use IAM here (in IonApiGateway) or if I need to use JWT or another way to auth.#2021-07-1614:58Joe LaneCan you show an example of your client config map?#2021-07-1615:01Joe LaneFWIW, you shouldn't have to configure anything in API Gateway to "set up authentication", that should be handled for you separately from an api gateway authorizer.#2021-07-1615:13Tatiana KondratevichMy example client config map:
{:server-type :ion
:region "eu-central-1"
:system "*"
:endpoint "*.http://eu-central-1.amazonaws.com"
}#2021-07-1615:17Tatiana KondratevichWhere can I set this up if not in api gateway? I tried curl from the terminal with my IAM user but it doesn't work.#2021-07-1615:28Joe LaneDo you have an aws profile established for your user on your laptop?
If so, attach it to :creds-profile "your creds-profile" in the map.#2021-07-1615:30Tatiana KondratevichYes I have aws profiles. And how after this I can test this right? I add :creds-profile "your creds-profile"#2021-07-1615:35Joe LaneWait, you're trying to connect your client to the IonApiGateway, shouldn't you be using ClientApiGatewayEndpoint#2021-07-1615:39Tatiana KondratevichWait, Am I need add ClientApiGatewayEndpoint to :endpoint for use curl https://$(IonApiGatewayEndpoint) ? im confusion#2021-07-1615:42Joe LaneAh. We are talking about totally different things.#2021-07-1615:43Joe LaneI thought you were having problems connecting via the datomic client. Is the datomic client connecting correctly?#2021-07-1616:18Tatiana KondratevichConnecting via datomic client does not work. I keep getting a "Execution error (ExceptionInfo) at datomic.tools.ops.aws/invoke!" error even though my user has been configured correctly. ClientApiGatewayEndpoint seems to be working when accessing with curl.
The thing is, I'm trying to follow an updated version of Datomic cloud documentation that was released couple of days ago. I thought trying https://docs.datomic.com/cloud/ions/ions-reference.html#invoke-web-service might be useful but curl request fails with 502 Bad Gateway error.
Not sure if I'm doing it the right way:)#2021-07-1616:54Joe LaneI'm assuming you're using the recently released 884-9095 version of cloud.
We create 2 API gateways for you (unless you explicitly chose not to when deploying the cloudformation templates).
The 1st API Gateway is for the datomic client, and it replaces the access-gateway / socks proxy. This is the ClientApiGatewayEndpoint output in cloudformation. You should put that url in your client config map:
{:server-type :ion
:region "eu-central-1"
:system "*****"
:endpoint ""
}
You will NOT be able to curl this endpoint and get a meaningful response (this wouldn't be secure).
Separately, the 2nd API Gateway we create is an API Gateway for your ions (which you may or may not have created already, I'm not sure what your exact situation is.)
Here are docs walking through how to set up your client: https://docs.datomic.com/cloud/tutorial/client.html and this https://docs.datomic.com/cloud/operation/howto.html#template-outputs describes how to get your ClientApiGatewayEndpoint#2021-07-1617:13Tatiana KondratevichJoe, thank you so much! It's clearer now.
No, I haven't created my ions yet, trying to check and configure everything I can before doing that. So once deployed, my ions will be accessible through this IonApiGatewayEndpoint, right? Do I require any specific AWS IAM policy to have access?
Really appreciate your help!#2021-07-1617:21Joe LaneThe IonApiGatewayEndpoint is on the internet, so however you want to auth that is up to you. But no, that IonApiGatewayEndpoint does not require any IAM policy.
The ClientApiGatewayEndpoint requires the same AWS IAM policy as before this release (to secure access to your database).#2021-07-1617:27Tatiana KondratevichGot it, thank you so much!#2021-07-1617:28Joe LaneHave fun, reach out if you run into anything else!#2021-07-1715:34Greg ReichowTrying to upgrade a solo system to 884-9095. It is failing upgrading the compute group at this step:
EntryRecordSet CREATE_FAILED [RRSet of type A with DNS name entry.datomic.us-west-2.datomic.net. is not permitted because a conflicting RRSet of type CNAME with the same DNS name already exists in zone datomic.us-west-2.datomic.net.]
I am wondering if this is connected to the new endpoint changes? My system name is "Datomic" (yes, not great), does this need to be a unique name given that it appears it is trying to create a DNS record based on this name in the AWS region?#2021-07-1804:25onetomI just created a system called dev-dcs, so I have a private hosted zone, called , which has 2 A records:
1.
2.
In a previous version of Datomic cloud, only the 1st record was defined, BUT as a CNAME
After a cursory look, there are no AWS::Route53::RecordSet definitions in either the previous Datomic Storage or the Solo compute CloudFormation stacks.
There is however a DeleteRecordSets lambda in https://s3.amazonaws.com/datomic-cloud-1/cft/732-8992/datomic-solo-compute-732-8992.json, which seems to clean up CNAME records.
So my bet is that you should just delete your current compute group and create one with the new version. You will have a few minutes down-time. Not sure how long it takes to delete a compute stack, but it took literally ~5 minutes to create the new version.
(I haven't read the migration guide yet, but I bet this would work)#2021-07-1810:28onetomIt takes 9 minutes to delete the 884-9095 compute stack. 5 minutes of that is spent with waiting for the autoscaling group to get deleted...
would be nice to shorten that time somehow... :/#2021-07-1814:58Greg ReichowThanks for the very detailed response; this solved the issue. Greatly appreciated!
#2021-07-1813:16xlfeUsing the Entity api, if I call get on a ref attribute I expect to get the referenced Entity (or a set of the same), but seemingly when the ref attribute points to an Entity with a db/ident I'm just getting a keyword (the ident) instead of an Entity object.
This appears at odds with the Entity documentation though?
> If an attribute points to another entity through a cardinality-many attribute, get will return a Set of entity instances.
Presumably, I could call db on the original entity and then d/entity again on the keyword/ident, but is that the suggested method?#2021-07-1912:14favilaThis is by design. The documentation you quote is just imprecise. This behavior is so “enum entities” are easier to work with. Your workaround is fine. Note the d/entity-db function which may make your implementation cleaner. The pull api does not do this so consider using it instead.#2021-07-1923:30xlfethanks @U09R86PA4 - will note your answer on my http://forum.datomic.com post and suggest an addition to the docs 🙂#2021-07-1918:57jacksonHowdy all, is there anyone specific I can DM or email about licensing?#2021-07-1919:00jarrodctaylorOf course. You can email sales@cognitect.com
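The ident-collapsing behavior favila describes can be sketched like this (hedged; the :order/status attribute, db, and order-eid are invented for illustration, not from the thread):

```clojure
(require '[datomic.api :as d])

;; Assume :order/status is a ref attribute pointing at "enum entities"
;; that each carry a :db/ident, e.g. :order.status/open.
(def e (d/entity db order-eid))

;; The Entity API collapses the ref to its ident keyword:
(:order/status e)   ; a keyword like :order.status/open, not an Entity

;; Workaround from the thread: re-resolve via the entity's own db value
;; (idents are valid entity identifiers):
(d/entity (d/entity-db e) (:order/status e))

;; The pull API does not collapse the ref; you ask for what you want:
(d/pull db [{:order/status [:db/id :db/ident]}] order-eid)
```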
#2021-07-1921:08kennyWith the new client api gateway endpoint, what happens when a request times out by hitting the api gateway timeout and the passed in :timeout is greater than the 30s max? Does Datomic know it's not worth continuing that work so it stops doing it? Should all our application code be aware of if it is using the client api gateway and automatically cap :timeout to 30s?#2021-07-2118:57stuarthallowayThis sounds like a performance optimization. Is there a performance problem?#2021-07-2119:23kennyProbably, but still curious on the answer to the q since it allows us to prioritize where eng effort goes.#2021-07-2015:29hoynkCan anyone point me to some resources on how to model data using Datomic/Datascript, EAVs in general? Especially if comparing to a relational model.#2021-07-2015:32Alex Miller (Clojure team)https://docs.datomic.com/cloud/livetutorial/datoms.html
#2021-07-2016:28jjttjjhttp://www.learndatalogtoday.org/ is what made it all click for me @boccato
#2021-07-2017:28hadilsI would like to deploy a web app with http-direct. Where should I deploy my static assets?#2021-07-2020:51jarrodctaylorCommonly I deploy a static site and associated resources through S3|Route53|CloudFront which communicates via http with my api that lives behind the api gateway http-direct endpoint.#2021-07-2020:54hadilsThanks @U0508JRJC! Very helpful!#2021-07-2112:02Tatiana KondratevichHey everyone!
We're creating a new storage + compute stack using 884-9095 version for both of them.
After creation was completed, I've configured push via github actions, which works as intended and everything gets to CodeDeploy. While processing, CodeDeploy gets to the ValidateService event and then fails with ScriptTimedOut (Script at specified location: scripts/deploy-validate failed to complete in 300 seconds).
I've checked datomic system's logs and I noticed this error ":datomic.cloud.cluster-node/-main failed: Syntax error macroexpanding at (core.clj:152:3)." which apparently has been there since system's creation. In addition, I've received a TargetTracking-table Alarm in Cloudwatch directly after creating the stack.
Would be grateful for any hints on this#2021-07-2112:23jarrodctaylorHave you run the application locally?#2021-07-2113:01Tatiana Kondratevich@U0508JRJC Yes. But I used dev-local structure when run application locally.#2021-07-2113:05jarrodctaylorThat should be fine. I just know I’ve burned myself before by assuming a change was correct and not actually running it before deploying and then wondering why I was trying to debug a timed out deploy. So wanted to start there.
My next question is were there any dependency overrides reported with pushing? If so did you update your deps files to use those and again try to run locally?#2021-07-2114:04Tatiana KondratevichYes, there were conflicts in dependency overrides. Updated clojure version and added all other dependencies to my deps.edn file. Works locally but github actions still fail. This recently added dependency is causing troubles: com.fasterxml.jackson.core/jackson-core {:mvn/version "2.10.1"}. The error is Execution error (ClassNotFoundException) at jdk.internal.loader.BuiltinClassLoader/loadClass (BuiltinClassLoader.java:581).
com.fasterxml.jackson.core.util.JacksonFeature#2021-07-2114:25jarrodctaylorWhere is that error reported?#2021-07-2114:28Tatiana Kondratevich@U0508JRJC on step running: clojure -A:ion-dev '{:op :push}' in github action.#2021-07-2114:40jarrodctaylorWe are working to keep dependencies up to date but your feedback is helpful here. Would you open a https://support.cognitect.com/hc/en-us/requests/new and include cloud versions and your deps? If you have a minimal application that exercises those deps as well that would be excellent.#2021-07-2114:59Tatiana KondratevichI fixed my github action deploy. I added com.fasterxml.jackson.core/jackson-annotations {:mvn/version "2.10.1"} and com.fasterxml.jackson.core/jackson-databind {:mvn/version "2.10.1"} in addition to com.fasterxml.jackson.core/jackson-core {:mvn/version "2.10.1"}
At the moment I don't see any other conflicts and the push is going through.
But I still get the error in CodeDeploy which I explained earlier (ScriptTimedOut in ValidateService). Do you have any idea how I can fix this?#2021-07-2115:08jarrodctaylorI am not clear on the specifics of what troubles you have had with github actions. Can we remove that from the equation here? Interested in helping to resolve the deps issues and opening a ticket would be a good way to help us enumerate version issues users are running into and to track progress towards resolving this specific problem.#2021-07-2115:19kenny@U028H6X0KRS post the full stacktrace#2021-07-2116:28jaret@U028H6X0KRS what version of ion/ion-dev are you using?#2021-07-2119:42Tatiana Kondratevich@U0508JRJC I sent a ticket as you recommended.
I removed github actions from the equation and tried the push and deploy commands from my terminal. Unfortunately, CodeDeploy still fails.
@U083D6HK9 I mentioned the content of the error earlier:
ValidateService event and then fails with ScriptTimedOut(Script at specified location: scripts/deploy-validate failed to complete in 300 seconds).
If you tell me where I can find something more detailed, I will gladly provide it to you.
@U1QJACBUM we use {com.datomic/ion-dev {:mvn/version "0.9.290"}}#2021-07-2119:43jaret@U028H6X0KRS and what version of ion?#2021-07-2119:48Tatiana Kondratevich@U1QJACBUM I assume you mean this com.datomic/ion {:mvn/version "0.9.50"}?
Is there an easy way or tool to migrate data from Datomic to MongoDB? Datomic is running on AWS and wanted to migrate the data to DocumentDB (AWS's MongoDB implementation)#2021-07-2115:38hadilsHi! Is there a startup hook in datomic cloud? I need to run a mount start somewhere. I can put it in the http direct middleware but it would be cleaner to use a startup hook. #2021-07-2212:51Geoffrey GaillardHi!
I restarted my dev laptop this morning and the peer can’t connect to the transactor anymore. I get an ActiveMQ error Error communicating with HOST localhost on PORT 43304. This says it’s a configuration problem: https://docs.datomic.com/on-prem/operation/deployment.html#peer-fails-to-connect
The configuration hasn’t changed in months, files didn’t move, rights didn’t change, and my laptop got restarted and updated many times since then. But all of a sudden it stopped working today.
So there’s def. something that changed and I don’t see it.
What could cause the peer to fail connecting to the transactor if configuration and access rights are not the issue?#2021-07-2213:23Geoffrey GaillardTurns out the issue is not related to Datomic but was caused by the terminal process from which I was starting the transactor. The buggy state survived reboots.#2021-07-2213:29Geoffrey GaillardTook me 7 hours. I’ll get coffee 😭
#2021-07-2215:03RyanHello! Quick question: trying to write some tests and was wondering if anyone knew of a pre-existing spec for datomic schema data?#2021-07-2220:30Lennart BuitThere is https://github.com/alexanderkiel/datomic-spec#2021-07-2301:08RyanThank you! 🙂#2021-07-2300:46fabraoHello all, what is the problem with that?
(d/q '[:find ?e ?id :where [?e :user/id ?id]] (-> @system :system.component.datomic/db))
; Execution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:57).
; :db.error/invalid-data-source Nil or missing data source. Did you forget to pass a database argument?
#2021-07-2300:47fabrao#:datomic{:conn #object[datomic.peer.LocalConnection 0x6ae42d2d "#2021-07-2301:02jaret@fabrao Looks like you are missing a DB.#2021-07-2301:03jaret(def db (d/db conn))
(d/q '[:find ?e ?id :where [?e :user/id ?id]] db)
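The fix jaret shows, sketched end-to-end against an in-memory peer database (hedged; the mem uri and the :user/id schema are assumptions for illustration):

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://hello")
(d/create-database uri)
(def conn (d/connect uri))

;; Minimal schema so :user/id exists:
@(d/transact conn [{:db/ident       :user/id
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one
                    :db/unique      :db.unique/identity}])
@(d/transact conn [{:user/id "u-1"}])

;; d/q needs a database *value* — not the connection, and not a
;; component/system map wrapping the connection:
(def db (d/db conn))
(d/q '[:find ?e ?id :where [?e :user/id ?id]] db)
;; => a set like #{[<eid> "u-1"]}
```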
#2021-07-2301:04jarethttps://docs.datomic.com/on-prem/query/query-executing.html#querying-database#2021-07-2313:58denikHow does one exclude values in a collection binding? this does not work:
(db/q '[:find [?m ...]
:in $ [?ignore-ids ...]
:where
[?m :mom-rec-id ?mid]
(not [?m :mom-rec-id ?ignore-ids])]
#{"ignored" "ids"})#2021-07-2314:01refsetHave you tried a contains? predicate like this instead of the not clause: [(clojure.core/contains? ?ignore-ids ?mid)] ?#2021-07-2314:02denikyes using functions works, I also passed in the complemented set and used it as a function#2021-07-2314:03denikjust surprised that there doesn’t seem to be an idiomatic way to do negation#2021-07-2314:03denikalso since I’m using the collection binding the contains? code would actually break#2021-07-2314:31refsethmm, I think you want to use Datalog-native unification like this then
(db/q '[:find [?m ...]
:in $ [?ignore-ids ...]
:where
[?m :mom-rec-id ?mid]
[(!= ?m ?ignore-ids)]]
#{"ignored" "ids"})#2021-07-2317:45souenzzoI had an issue with this kind of queries and ended up writing:
:in $ ?ignore
:where
[?m :mom-rec-id ?mid]
[(contains? ?ignore ?mid) ?q]
[(ground false) ?q]
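souenzzo's ground/false trick written out as a complete query (a hedged sketch; :mom-rec-id and the calling convention are taken from the thread, everything else assumed):

```clojure
;; Negation without a `not` clause: bind the predicate result to ?q,
;; then unify ?q with false, keeping only ?m whose ?mid is NOT in the
;; ?ignore set. Note ?ignore is passed as a plain set argument, not a
;; collection binding, so `contains?` can be applied to it directly.
(def ignore-query
  '[:find [?m ...]
    :in $ ?ignore
    :where
    [?m :mom-rec-id ?mid]
    [(contains? ?ignore ?mid) ?q]
    [(ground false) ?q]])

;; Called as:
;; (d/q ignore-query db #{"ignored" "ids"})
```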
https://gist.github.com/souenzzo/c7b5a5434d4c04efcc58802c81b46023#2021-07-2317:46denik@U899JBRPF tried that and it didn’t work for me#2021-07-2317:46denikcould also be a bug in datascript#2021-07-2318:09refsetoh, yeah, maybe datascript doesn't support it...I was looking at Crux's tests for inspiration 😅 https://github.com/juxt/crux/blob/065db71c9f6f4c3a3c2d2eb75916bb15c719f75b/crux-test/test/crux/query_test.clj#L1008-L1011 - I can't see an equivalent test in datascript's suite, and looking at the source it seems datascript maps != to not= which is not really the same thing imo
I guess that kind of unification logic is ~impractical without a query planner
(sorry for the noise!)#2021-07-2613:38jaretFYI I forgot to cross post our recent on-prem release here: https://twitter.com/datomic_team/status/1418601460900802574?s=21#2021-07-2614:17wcfHey guys,
Is there an easy way or tool to migrate data from Datomic to MongoDB? Datomic is running on AWS and we wanted to migrate the data to DocumentDB (AWS's MongoDB implementation)#2021-07-2618:33RyanProbably not. As Datomic isn't a document database and Mongo / DocumentDB isn't equipped to handle Datomic data directly, you'll have to decide how to map the data down.#2021-07-2620:01rgorrepatiIs there a way to log (like log4j, not datomic log) the transactions to console? I want to log the input to d/transact. The codebase is already set up to use slf4j.
https://docs.datomic.com/cloud/changes.html#884-9095
#2021-07-2820:04JohnJis the dev-tools download link not working?#2021-07-2820:21Joe LaneI just tried it, at first gmail didn't want to download the link, but after waiting a few seconds it worked just fine. Does clicking the maven configuration page work for you @jjaws?#2021-07-2820:22Joe LaneThe link is a presigned URL specific to your email.#2021-07-2820:30JohnJIt's working for me now, thx#2021-07-2820:46JohnJLooks like 0.9.235 has not been pushed to maven#2021-07-2820:46JohnJ0.9.232 does download#2021-07-2820:57Joe LaneAre you looking for dev-local or REBL?#2021-07-2821:00Joe Lanedev-local it seems.#2021-07-2821:00JohnJyes#2021-07-2821:01Joe LaneThanks for the heads up
#2021-07-2915:26redingerHI @jjaws You were correct that dev-local 0.9.235 was not pushed to Maven. I’ve just pushed it, so you should be able to get it now.
#2021-07-2907:11zendevil.ethHow do I use Datomic on gcp? #2021-07-3114:08pedrorgirardiI suppose you have to run Datomic On-prem https://www.datomic.com/on-prem.html#2021-07-2907:37zendevil.ethI want to run it on a gke pod #2021-07-2920:07rgorrepatiwhat is the difference between transact and transact-async when I get similar timings on the same list of txs
(time (d/transact (conn) txs)) ;; 8369 ms
(time (d/transact-async (conn) txs)) ;; 8455 ms
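The semantic difference (which favila explains further down the thread) can be sketched like this (hedged; conn and txs are assumed to be bound, and the 10s timeout is arbitrary):

```clojure
(require '[datomic.api :as d])

;; d/transact derefs the returned future for you (blocking, with a
;; system-property-controlled timeout), then returns it:
@(d/transact conn txs)

;; d/transact-async returns the future immediately; you decide when and
;; how long to wait. Always deref it eventually — that is the only way
;; to observe a transaction failure (deref throws on failure):
(let [fut (d/transact-async conn txs)]
  (deref fut 10000 ::timed-out))   ; wait up to 10s, sentinel on timeout
```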
#2021-07-2920:09Joe LaneHow many txs are you attempting to transact there?#2021-07-2920:10rgorrepati10k, in an in-memory db.#2021-07-2920:12Joe LaneAnd what is the timing if you run:
(let [conn (conn)]
(time (d/transact conn txs))
(time (d/transact-async conn txs)))#2021-07-2920:13rgorrepati"Elapsed time: 8887.066625 msecs"
"Elapsed time: 8879.807292 msecs"#2021-07-2920:13favilad/transact derefs the future with a system-property-controlled timeout, then returns the future. It throws if the timeout is reached before the future resolves. d/transact-async returns the future without waiting, and lets you control deref and timeout.#2021-07-2920:14favilawhat you are seeing is maybe the memory db doesn’t actually do work in another thread.#2021-07-2920:15favilaIMO d/transact should only be for convenience in the repl; d/transact-async should be in production code.#2021-07-2920:28rgorrepatiMakes sense. I just tested it with a remote db and I get
"Elapsed time: 125.154958 msecs"
"Elapsed time: 0.808292 msecs"#2021-07-2920:28rgorrepatifor 5 txs#2021-07-2920:29rgorrepatiThank you :thumbsup:#2021-07-2920:30favilanote that you should always deref the result of d/transact-async. That’s the only way you will know if the transaction succeeded or not#2021-07-2920:31rgorrepatiI wish there was a callback we could attach to it. I guess there is no way other than to maintain our threadpool of blocking threads to deref those async txs#2021-07-2920:39favilaconsider using manifold: (manifold.deferred/on-realized (manifold.deferred/->deferred fut) success-cb error-cb)#2021-07-3015:09Daniel Jomphe👋 Hi! Does Datomic change how we seed app-supporting data when we develop new features in an app?
Normally we'd use some way of migrating schema and data to support those new features, without re-seeding the entire schema and app-level data. We'd probably track in a DB property at which version of the seed we're at.
I know there are non-breaking ways of evolving the Datomic schema, but I suppose there are no such ways of evolving data seeds that can be wholly re-transacted idempotently.
In other words if we run the data seed many times, we'll end up with multiple copies of the e.g. admin user, and other business-related entities, etc.
So, does Datomic have any properties that help us in this area? Thanks!#2021-07-3015:31tvaughanI wrote something to handle this.
(defn tx!
  [conn data]
  (transact conn {:tx-data data}))

(defn tx-resource!
  [conn resource]
  (tx! conn (resources/read-resource resource)))

(defn- tx-status
  [conn tx-id]
  (-> (conn->db conn)
      (q-by-ident [:tx/id tx-id] [{:tx/status [:db/ident]}])
      :tx/status
      :db/ident))

(defn- tx-apply!
  [conn {:keys [tx-id tx-data]}]
  (tx! conn (conj tx-data {:tx/id tx-id :tx/status :tx.status/applied})))

(defn- tx-applied?
  [conn tx-id]
  (case (tx-status conn tx-id)
    :tx.status/applied true
    nil))

(defn tx-idempotent!
  [conn resource]
  (let [{:keys [tx-id tx-data tx-kind] :as props} (resources/read-resource resource)]
    (when-not (tx-applied? conn tx-id)
      (case tx-kind
        :fn (do
              (require (symbol (namespace tx-data)))
              (tx-apply! conn (assoc props :tx-data ((resolve tx-data) conn))))
        (tx-apply! conn props)))))
On start-up, we always transact this (not idempotent):
[{:db/ident :tx/id
  :db/cardinality :db.cardinality/one
  :db/valueType :db.type/keyword
  :db/unique :db.unique/value}
 {:db/ident :tx/status
  :db/valueType :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident :tx.status/applied}]
Then our schemas and seed data get wrapped like:
{:tx-id :migration-0001
 :tx-data
 [{:db/ident :editor-session/pid
   :db/cardinality :db.cardinality/one
   :db/valueType :db.type/string
   :db/unique :db.unique/value}]}
Which are transacted on the command-line (idempotent) and can be added to a start-up script, like a systemd unit
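A hedged usage sketch of tvaughan's helpers above (the resource path is hypothetical):

```clojure
;; After transacting the :tx/id bookkeeping schema on start-up:
(tx-idempotent! conn "migrations/0001-editor-session.edn")
;; Running it a second time is a no-op: tx-applied? finds the
;; :tx/id marker transacted alongside the migration, so the
;; wrapped tx-data is skipped.
```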
#2021-07-3015:40Daniel JompheWow, this does seem simple, solid and easy! Thanks for the answer!
#2021-07-3015:45Daniel JompheSo, thinking about it, I see that Datomic does help by the fact that transactions operate on data that can be easily pre-processed before being committed. This solution ends up being generic and very concise.#2021-07-3113:08hdenHi, I’ve made a duct module for this purpose. It uses ragtime to implement the same pattern.
https://github.com/hden/duct.module.datomic#integrant-keys#2021-07-3113:24Daniel JompheHi, I wasn't aware there exists a migrator adapted to Datomic Cloud, thanks for sharing this!
#2021-07-3020:16Drew VerleeI'm trying to troubleshoot why i can't connect to datomic cloud:
➜ datomic-cli git:(master) ✗ ./datomic-access client grow
download: to ../../.ssh/datomic--grow-bastion
download: to ../../.ssh/datomic--grow-bastion.hostkey
system grow
REGION_ARG
PR
s3: grow-storagef7f305e7-qyyc6pldu9b0-s3datomic-1cz3x1k0cdzby
pk: /home/drewverlee/.ssh/datomic--grow-bastion
ip: None
hk: /home/drewverlee/.ssh/datomic--grow-bastion.hostkey
ssh marker
SSH -o IdentitiesOnly=yes
OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 20: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 22: Applying options for *
ssh: Could not resolve hostname none: Temporary failure in name resolution
The shell command that datomic-acess builds is
ssh -v -o UserKnownHostsFile=/home/drewverlee/.ssh/datomic--grow-bastion.hostkey -o IdentitiesOnly=yes -i /home/drewverlee/.ssh/datomic--grow-bastion -CND 8182
The last part (the user@host destination, where the host came back as None) would seem to be the issue, as I assume "None" should be an IP fetched by the script function gateway_ip.
I looked at the gateway_ip function and it's being given the only required arg which is the system, so I'm not sure why it would return None. Any ideas?#2021-07-3020:25Joe Lane@U0DJ4T5U1 I'm not sure what your datomic-access script is doing but the access command should be
datomic client access <system> per https://docs.datomic.com/cloud/getting-started/get-connected.html#access-gateway
HOWEVER!! that only matters if you're not on the latest release. If you're on the latest release this changes. Let me know if I can help more.#2021-07-3020:56Drew VerleeAh, ok. I am on the latest. I'll recheck the docs. I thought I caught everything. I'll look tomorrow :)#2021-07-3021:31Joe LaneThe access gateway doesn't exist in latest 🙂 Enjoy#2021-07-3022:13Daniel Jomphe...but you need to update your client config endpoint url with the new url.#2021-07-3107:36weijust a general data modeling question, now that composite keys are usable, is it still best practice to use uuids as primary keys everywhere? or is it better to use composite keys and not have the uuid? in my particular case, I have multiple projects (`:project/uuid`) but each project has its own tokens numbered starting from 0, so the tokens have the non-unique attribute :token/id and the unique composite attribute :token/project+id . is that best practice or should the tokens just have a :token/uuid attribute for its primary key?
separate question, should the token's composite key be project ref + token id (a long) or should it be project uuid + token id?#2021-07-3113:23hdenIt really depends on your use case.
> each project has its own tokens numbered starting from 0
Assuming that it’s a long because you need to sort the tokens by primary key, a sequential unique key might be a good alternative.
UUID v6 (Draft)
https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format
CUID
https://github.com/hden/cuid
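The :token/project+id composite key from the question can be declared with Datomic's composite-tuples feature (a hedged sketch; attribute names come from the question, value types are assumptions):

```clojure
;; Schema sketch for project-scoped token numbering.
[{:db/ident       :token/project
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :token/id
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}
 ;; Composite tuple: Datomic maintains this value automatically from
 ;; the two source attributes; uniqueness gives the per-project key,
 ;; so no separate :token/uuid is strictly required.
 {:db/ident       :token/project+id
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:token/project :token/id]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```

Using the project ref (rather than the project uuid) as a tuple source keeps the tuple compact and lets lookup refs like [:token/project+id [project-eid 0]] work directly.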
#2021-07-3114:13zendevil.ethI have the following Dockerfile:
FROM alpine:latest
RUN apk add wget
ARG VERSION
RUN wget --continue --tries=0
However, I’m getting this error upon running it:
docker build . -f Dockerfile_db -t humboi/database --build-arg VERSION=1.0.6316
#6 748.3 2021-07-31 13:56:30 (127 KB/s) - Connection closed at byte 96492108. Retrying.
#6 748.3
#6 749.3 --2021-07-31 13:56:31-- (try: 2)
#6 749.3 Connecting to ()|52.217.90.44|:443... connected.
#6 751.6 HTTP request sent, awaiting response... 403 Forbidden
#6 751.6 2021-07-31 13:56:33 ERROR 403: Forbidden.
#6 751.6
------
executor failed running [/bin/sh -c wget --continue --tries=0
#2021-07-3121:02Drew Verleehttp-direct only supports one entry point, correct? I don't need it to support more, I'm just sanity checking my understanding of what's going on.#2021-08-0109:54zendevil.ethhere https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html it says “For now, you will use the “mem” storage”. And uses -d hello,datomic:. How does one use a storage that persists?
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,datomic:
but get:#2021-08-0109:57zendevil.ethExecution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/invalid-sql-connection Must supply jdbc url in uri, or DataSource or Callable<Connection> in protocolObject arg to Peer.connect
#2021-08-0110:05zendevil.ethwhere do I get the jdbc url? and how to supply it exactly?#2021-08-0117:57favilahttps://docs.datomic.com/on-prem/overview/storage.html#2021-08-0118:00favilaDatomic peer architecture is “bring your own storage” so the jdbc url (if using a sql storage) is specific to whatever db you have set up separately. You can use the dev storage as a persisting store just for testing/experimentation or even very light production use; otherwise use one of the others#2021-08-0118:01favilaSome jdbc url examples are in the datomic.api/connect docstring, but specifics depend on the jdbc driver you are using—check it’s documentation#2021-08-0118:02favilahttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/connect#2021-08-0118:05favilaEg documentation for postgresql jdbc uri: https://jdbc.postgresql.org/documentation/80/connect.html#2021-08-0305:45zendevil.ethThis is what I get:
prikshetsharma@Prikshets-MacBook-Pro
Execution error at datomic.peer/get-connection$fn (peer.clj:661).
Could not find humboi in catalog
#2021-08-0313:50faviladid you create that database?#2021-08-0313:53favila(datomic.api/create-database "datomic:")
in a repl using the peer (not client) api https://docs.datomic.com/on-prem/peer/peer-getting-started.html#connecting#2021-08-0314:01zendevil.ethExecution error (Exceptions$IllegalArgumentExceptionInfo) at datomic.error/arg (error.clj:79).
:db.error/read-transactor-location-failed Could not read transactor location from storage
#2021-08-0314:02zendevil.ethAlso the repl route seems to be tricky to do from a dockerfile when creating a datomic image. Is there a command line way to do this?#2021-08-0314:04favilano, but bin/repl in the datomic transactor zip gives you a dead simple repl#2021-08-0314:05favilawith the right stuff on the classpath#2021-08-0314:05favilawell, maybe not postgres#2021-08-0314:06favilathis exception suggests to me that either postgres isn’t running, or the process can’t communicate with it, or the transactor isn’t running#2021-08-0314:07favilaif docker is involved my money is on the second one…#2021-08-0314:08favilawait a second--stepping back, a dockerfile shouldn’t be creating databases#2021-08-0314:09favilaThis is something you do once and never again#2021-08-0120:51Drew VerleeThis is a very useful post for anyone transitioning their solo topology to prod as part of the latest release of datomic. https://forum.datomic.com/t/experience-report-updating-from-solo-to-datomic-cloud-884-9095/1913/3
Is the "Lambda proxy" mentioned (in the above post) the lambda endpoint? https://docs.datomic.com/cloud/ions/ions-tutorial.html#configure-entry-points. Or is the lambda endpoint option still available?#2021-08-0123:01Joe LaneLambda proxy for http requests is redundant now that http-direct is available for all instances.
I know you’re working through a websockets scenario which is why I said http requests above :)
That makes sense to me, but my first try (sending a ws connect request) didn't seem to register (returned a 400). So i'm thinking it over 🙂#2021-08-0123:32Drew Verleehmm the http request type might need to be a GET not a POST.#2021-08-0123:41Drew VerleeIf i have to use http-direct for websockets, since they're different protocols i would need something to do translation. Given aws offers HTTP as an integration type for websockets, i assume i can just configure the endpoint as the IonApiGatewayEndpoint.#2021-08-0200:01Drew Verleeor i'm just too impatient, and aws hadn't deployed it yet, the version without the proxy at least is getting through#2021-08-0213:56Daniel JompheHi, I'm the author of the post. Under the "old" datomic I knew how to make regular Ion Lambdas and Ion Lambda proxies work, but didn't succeed in configuring Http Direct in a prod stack. Things at AWS had evolved very differently from what the Datomic docs describe. Our Websockets were handled through the Ion Lambda proxies, and worked fine that way. It really helped me to use the Replion documentation to set up a remote REPL to debug it until it worked. Wscat was a very useful CLI tool to debug connections on the other side too.
When I migrated our stacks to the new Datomic, I moved our HTTP Ion Lambda proxy to become our HTTP Direct entry point, and left out the Websocket lambda proxy. I didn't try migrating it in any way since we stopped using it a few months ago, so we didn't need to make sure it still works under the new Datomic. Therefore I'm not sure how I would approach a solution for both HTTP and WS through HTTP Direct, and if AWS supports it yet.#2021-08-0214:25Drew Verlee@U0514DPR7 Thanks for the feedback. I'm finding some time here and there to set up websockets using datomic ions. I'm going to try and share my experience in case they help others. Also as a form of self promotion, i'm looking for a job 🙂
I'm nearly sure what i'll end up with is two api gateways.
The browser client would send a message to the one configured to take a websocket request (pictured above) and forward it to the HTTP url set up on the apigateway i assume datomic is creating for us.
I'm not sure what this might mean in terms of performance under the hood, i feel that's always part of the challenge with cloud and high level abstractions, the second you're off the happiest of paths things get odd.
(defn wss
"Web handler that returns the websocket world."
[req]
(try
(start/start-once!)
(let [ctx (get-in req [:datomic.ion.edn.api-gateway/data :requestContext] {})
body (get-in req [:datomic.ion.edn.api-gateway/data :body])
stage (get ctx :stage "stg")
e-type (get ctx :eventType nil)
action (get ctx :routeKey :default)
connId (get ctx :connectionId nil)
_ (log/info {:op "WssProxy" :step action})]
(case action
"message" (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
"$connect" (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
"$disconnect" (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
"$default" (printf "%s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body)
(printf "IMPOSSIBLE DEFAULT: %s: Conn %s: Event %s, Action %s, Body: %s" stage connId e-type action body))
(OK (:body req "This response value isn't idiomatically required for the Websocket World
unless we activate the integration response in AWS API Gateway!")))
(catch Throwable t
(log/alert {:op "SystemStartupError" :ex t}) ; TODO separate startup vs handling
(throw t))))#2021-08-0214:39Daniel JompheWith the new datomic it would also work except those (get-in req [...]) in the let because you will no more use the ionize function to wrap this handler.#2021-08-0214:40Daniel JompheSo from your WS Gateway to your HTTP Gateway, you should make sure you transport the action and connectionId that the HTTP endpoint will have to use.#2021-08-0214:41Daniel JompheWe liked how API Gateway maintained WS connections and translated them to HTTP calls to our handler, so that our app wouldn't almost need to know that it was responding to WS clients.#2021-08-0214:43Daniel JompheWith that said, I expect your WS Gateway will be configured thusly:#2021-08-0214:45Daniel JompheThis is the way that AWS will manage connections for you. But I hope I'm not misleading you. You seem to know some more than me where you plan to go!#2021-08-0214:49Drew Verleethanks.
Yea, the part i'm working on now is that translation. The aws docs are clear on what to do in general, but i'm not sure what the request template should be. This setup (see pic) gives me a request with a body of "java.io.BufferedInputStream@…", which i can probably handle in the app. i'm trying the other way to do content handling (convert to text -> convert to binary) before i do more investigation, as it's faster to just try.
I have a rudimentary understanding of what i'm doing. I need to be more certain what datomic is doing for me. I assume it's setting up an http api gateway, aws seems to allow for websocket apis to pass to http endpoints, but how does that compare to using the lambda? I assume they both just redirect, but one has loadbalancing? Would that mean i could loadbalance at both points?
My favorite part is the docs on it are like "select X if you want X" oh, thanks, that's really useful aws. Then the links go in a circle, it's like the links are trying to pass the buck.#2021-08-0216:45Drew Verleemy next step is to verify that i can't use the ion-config > lambdas option with lambda proxy. like everyone else on the planet 🙂#2021-08-0217:59Drew Verleeyep, lambda proxy works just fine.#2021-08-0219:44Drew Verleehm, ok. so clearly my http gateways weren't doing anything because i passed them my ClientApiGatewayEndpoint and not the IonApiGatewayEndpoint.
Once you select and save something other than passthrough it won't let you select it again!#2021-08-0220:32Drew Verleehuzzah, ok, i just have to slurp the body of the request which is the BufferedInputStream#2021-08-0312:55Daniel Jomphe🎉 And thanks for the guide you wrote here: https://forum.datomic.com/t/websocket-guide-wip/1916#2021-08-0320:07Drew Verleethanks @U0514DPR7.
And thanks for your feedback. I'm curious, is it sufficient to just slurp the request body? Asking because i'm getting an error and the cause is "stream closed". see full error below. Honestly, i have avoided thinking about some of the finer points of io and just defaulted to using slurp whenever possible, which seems to be nearly always :0
{
"Msg": "IonHttpDirectException",
"Ex": {
"Via": [
{
"Type": "java.io.IOException",
"Message": "Stream closed",
"At": [
"java.io.BufferedInputStream",
"getBufIfOpen",
"BufferedInputStream.java",
176
]
}
],
"Trace": [
[
"java.io.BufferedInputStream",
"getBufIfOpen",
"BufferedInputStream.java",
176
],
[
"java.io.BufferedInputStream",
"read",
"BufferedInputStream.java",
342
],
[
"sun.nio.cs.StreamDecoder",
"readBytes",
"StreamDecoder.java",
284
],
[
"sun.nio.cs.StreamDecoder",
"implRead",
"StreamDecoder.java",
326
],
[
"sun.nio.cs.StreamDecoder",
"read",
"StreamDecoder.java",
178
],
[
"java.io.InputStreamReader",
"read",
"InputStreamReader.java",
181
],
[
"java.io.BufferedReader",
"fill",
"BufferedReader.java",
161
],
[
"java.io.BufferedReader",
"read1",
"BufferedReader.java",
212
],
[
"java.io.BufferedReader",
"read",
"BufferedReader.java",
287
],
[
"java.io.Reader",
"read",
"Reader.java",
229
],
[
"$fn__11564",
"invokeStatic",
"io.clj",
337
],
[
"$fn__11564",
"invoke",
"io.clj",
334
],
[
"clojure.lang.MultiFn",
"invoke",
"MultiFn.java",
239
],
[
"$copy",
"invokeStatic",
"io.clj",
406
],
[
"$copy",
"doInvoke",
"io.clj",
391
],
[
"clojure.lang.RestFn",
"invoke",
"RestFn.java",
425
],
[
"clojure.core$slurp",
"invokeStatic",
"core.clj",
6956
],
[
"clojure.core$slurp",
"doInvoke",
"core.clj",
6947
],
[
"clojure.lang.RestFn",
"invoke",
"RestFn.java",
410
],
[
"tomatto.backend.datomic.ion.websocket$http_handler",
"invokeStatic",
"websocket.clj",
28
],
[
"tomatto.backend.datomic.ion.websocket$http_handler",
"invoke",
"websocket.clj",
26
],
[
"clojure.lang.Var",
"invoke",
"Var.java",
384
],
[
"datomic.ion.http_direct$invoke_ion",
"invokeStatic",
"http_direct.clj",
79
],
[
"datomic.ion.http_direct$invoke_ion",
"invoke",
"http_direct.clj",
72
],
[
"datomic.ion.http_direct$processing_callback$fn__11670",
"invoke",
"http_direct.clj",
91
],
[
"cognitect.http_endpoint.Endpoint$fn__13285$fn__13286$fn__13287",
"invoke",
"http_endpoint.clj",
181
],
[
"clojure.core$binding_conveyor_fn$fn__5773",
"invoke",
"core.clj",
2034
],
[
"clojure.lang.AFn",
"call",
"AFn.java",
18
],
[
"java.util.concurrent.FutureTask",
"run",
"FutureTask.java",
264
],
[
"java.util.concurrent.ThreadPoolExecutor",
"runWorker",
"ThreadPoolExecutor.java",
1128
],
[
"java.util.concurrent.ThreadPoolExecutor$Worker",
"run",
"ThreadPoolExecutor.java",
628
],
[
"java.lang.Thread",
"run",
"Thread.java",
829
]
],
"Cause": "Stream closed"
},
"Type": "Event",
"Tid": 111,
"Timestamp": 1628020795559
}
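For what it's worth, the "Stream closed" behavior in the trace above can be reproduced with plain java.io, no Datomic involved: slurp closes the stream it reads, so a second slurp on the same stream fails. A minimal sketch (the ByteArrayInputStream stands in for the http-direct request body):

```clojure
;; Sketch: slurp closes the underlying stream when it finishes, so a
;; second slurp on the same BufferedInputStream throws the IOException
;; seen in the trace above. Read once, keep the string around.
(import '(java.io BufferedInputStream ByteArrayInputStream))

(def body-stream
  (BufferedInputStream. (ByteArrayInputStream. (.getBytes "{:action \"$connect\"}"))))

;; Read it once and hold on to the string:
(def body (slurp body-stream))

;; A second read now fails:
(try
  (slurp body-stream)
  (catch java.io.IOException e
    (.getMessage e)))
;;=> "Stream closed"
```

The takeaway matches Joe Lane's link below: a closed BufferedInputStream throws on any further read, so slurp the body exactly once and pass the resulting string to the rest of the handler.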
#2021-08-0320:11Drew Verleehmm, is the error telling me that by the time i try to slurp the body/stream it's already closed? maybe you can only slurp once for reasons.#2021-08-0320:14Joe Lane@U0DJ4T5U1 That's standard InputStream behavior https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/BufferedInputStream.html#close()#2021-08-0320:15Drew Verleefair enough. i'm surprised i never ran into that before.#2021-08-0320:15Drew Verleethanks!#2021-08-0213:02joshkhdid the 884-9095 upgrade happen to somehow override the /ping REST resource for a deployed HTTP ion? prior to the upgrade my API had a /ping route with a custom response. after the upgrade i just get back Healthy Connection#2021-08-0214:48Daniel JompheSeems so! Here, our custom ping resource is behind an e.g. /some-path/custom-ping URL so I didn't notice the potential change, but I now see that /ping responds just like it does for you.#2021-08-0218:38joshkhthanks for confirming @U0514DPR7. i'm glad i wasn't using our custom response for anything important 😉
#2021-08-0220:30redingerThat’s interesting that this wasn’t an issue until after you upgraded to the new release.
AWS documents the /ping path as reserved for API Gateway for their own health checks. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html
(Although they don’t mention the HTTP API on that page, they use that path there as well.)#2021-08-0214:47ennI’m curious how other Datomic users handle the need for aggregating counts of entities efficiently. I’m thinking of things like “I want to know the number of albums in every category”, where the numbers of both albums and categories are large and can grow without bound.
Do you just do aggregation and count from scratch every time you need to report this info? Do you store it? If you store it, do you use something like transaction functions to keep it up-to-date?
We are finding that where there are many possible state transitions it can become costly in engineering time to ensure that these aggregate values are always updated correctly.#2021-08-0400:46xlfeWe're using either the datoms api or the index api which are very quick. The key for us has been having the right attributes indexed from the start and using the low level accesses#2021-08-0306:46zendevil.ethHow do I connect to the postgres storage in datomic?
I ran the following commands:
psql -f bin/sql/postgres-db.sql -U postgres
psql -f bin/sql/postgres-table.sql -U postgres -d datomic
psql -f bin/sql/postgres-user.sql -U postgres -d datomic
#2021-08-0306:47zendevil.ethAnd when I run:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,datomic:
I get:
[1] 38331
zsh: no matches found: humboi,datomic:
#2021-08-0313:52jaret@U01F1TM2FD5 you should run a transactor against your postgres storage. It looks like you are running peer-server (which you can do once you have a transactor up and running and a DB created for the peer-server to serve). In addition to https://docs.datomic.com/on-prem/overview/storage.html#sql-database I have an example (https://jaretbinford.github.io/SQL-Storage/) of getting Postgres and MySQL storage up and running.#2021-08-0313:53jaretI hope that helps. Shoot me a support e-mail at [email protected] if you run into any issues 🙂#2021-08-0314:15zendevil.ethSo this is how I’m running the transactor:
bin/transactor config/samples/sql-transactor-template.properties#2021-08-0314:16zendevil.ethAnd it starts:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
#2021-08-0314:16zendevil.ethThis is the config:
protocol=sql
host=localhost
port=8998



###################################################################
# See

license-key=foobar


###################################################################
# See

sql-url=jdbc:
sql-user=datomic
sql-password=datomic

## The Postgres driver is included with Datomic. For other SQL
## databases, you will need to install the driver on the
## transactor classpath, by copying the file into lib/,
## and place the driver on your peer's classpath.
sql-driver-class=org.postgresql.Driver
#2021-08-0314:18zendevil.ethThis is the config and the conn:
;; as environment variables
(defn cfg [] {:server-type :peer-server
              :access-key "myaccesskey"
              :secret "mysecret"
              :endpoint "datomic:"
              :validate-hostnames false})

(def *conn
  "Get shared connection."
  (delay (d/connect (d/client (cfg)) {:db-name "humboi"})))
But the transactions don’t seem to be working#2021-08-0314:24zendevil.ethGives invalid connection config#2021-08-0314:25zendevil.ethI tried “localhost:8998” too for endpoint but that didn’t work either#2021-08-0315:14jaretYou are trying to connect via peer-server. You need to connect and create a DB in order to be able to serve a DB.#2021-08-0315:15jaretlaunch a peer against the transactor (i.e. a REPL), use the peer library to create a DB like:#2021-08-0315:15jaret(require '[datomic.api :as d])
(def uri "datomic:")
(d/create-database uri)
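To make the create-then-serve order concrete, a sketch of the full flow jaret describes (this assumes a running transactor on Postgres storage; the sql URI, access key/secret, and db name are placeholders echoing this thread, not verified values):

```clojure
;; Sketch only -- requires the datomic.api peer library on the
;; classpath and a running transactor. URI, keys, and db name are
;; placeholders from this thread.
(require '[datomic.api :as d])

;; 1. Create the database via the peer API. This is idempotent:
;;    it returns false (and does nothing) if the db already exists.
(def uri "datomic:sql://humboi?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")
(d/create-database uri)

;; 2. Only then start the peer-server against that db:
;;    bin/run -m datomic.peer-server -h localhost -p 8998 \
;;      -a myaccesskey,mysecret -d "humboi,<uri above>"

;; 3. Client-api consumers connect through the peer-server endpoint:
;;    {:server-type :peer-server
;;     :access-key  "myaccesskey"
;;     :secret      "mysecret"
;;     :endpoint    "localhost:8998"
;;     :validate-hostnames false}
```

Note that the client config's :endpoint is the peer-server's host:port, not a datomic: URI; the datomic:sql URI is only used by peers (including the peer-server itself and the REPL that creates the db).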
#2021-08-0315:15jaretThen you can standup your peer-server against that DB.#2021-08-0315:15jaretAnd you'll have the endpoint for your config map#2021-08-0315:20zendevil.ethPlease help me. So I have this web server. Should it use the peer or the client library?#2021-08-0315:20zendevil.ethThere’s just a web server and the datomic server#2021-08-0315:21zendevil.ethIf I use the client library in the web server, then I have to figure out a way to create the database in the dockerfile of the datomic server right?#2021-08-0315:22zendevil.ethBut if I use the peer library, then I don’t have to create the database in the dockerfile and can create the database when the app starts?#2021-08-0315:23zendevil.ethWhat if both are running on kubernetes pods and the datomic server uses Persistent Volume Claim? If I create the server on startup everytime I deploy the cluster again, wouldn’t it overwrite what was already written in the persistent volume?#2021-08-0316:15favilacreate-database is idempotent, so it is safe to run repeatedly (it will return false and do nothing if the db already exists). That said, it doesn’t make sense to me to do is this way because it’s persistent state that’s a prerequisite to the entire system running. Just like you don’t put schemas/create-tables/create-auth, etc into the startup of postgres, it doesn’t make sense to put db creation into the startup of the transactor. (Besides an empty newly-created db is likely not usable by your application in practice anyway--it probably needs schema and some data.)#2021-08-0404:49zendevil.ethit does make sense to create a db while creating the datomic image because my web server that uses the client api cannot create a db when it goes up, so the db will be created by the datomic image upon startup and the client can then read and write on the db.#2021-08-0515:32zendevil.eth@U09R86PA4 https://clojurians.slack.com/archives/C0PME9N9X/p1628177355015100#2021-08-0313:32Tatiana KondratevichHi, all!
I'm currently following the datomic-ions tutorial mentioned in the documentation.
I've noticed an :allow keyword in ion-config.edn with a predicate under it. However, there's no info on it in the docs.
{:allow [datomic.ion.starter.attributes/valid-sku?]
:lambdas {:ensure-sample-dataset
{:fn datomic.ion.starter.lambdas/ensure-sample-dataset
:description "creates database and transacts sample data"}
:get-schema
{:fn datomic.ion.starter.lambdas/get-schema
:description "returns the schema for the Datomic docs tutorial"}
:get-items-by-type
{:fn datomic.ion.starter.lambdas/get-items-by-type
:description "return inventory items by type"}}
:http-direct {:handler-fn datomic.ion.starter.http/get-items-by-type}
:app-name "reltest-781-prod"}
Would be grateful if anyone could explain this for me: what this keyword is responsible for, what can be used under it, etc.
Thanks!
Link to the repo: https://github.com/Datomic/ion-starter#2021-08-0313:37favilahttps://docs.datomic.com/cloud/ions/ions-reference.html#ion-config#2021-08-0313:37favila> :allow is a vector of fully qualified symbols naming https://docs.datomic.com/cloud/query/query-data-reference.html#deploying or https://docs.datomic.com/cloud/transactions/transaction-functions.html functions. When you deploy an application, Datomic will automatically require all the namespaces mentioned under `:allow`.#2021-08-0313:37Joe Lanehttps://github.com/Datomic/ion-starter/blob/master/siderail/validate_sku.repl#2021-08-0313:45Tatiana Kondratevich@U09R86PA4 thanks! somehow I wasn't able to find this through the search bar#2021-08-0313:46Tatiana Kondratevich@U0CJ19XAM thank you
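A sketch of how an allowed function gets used: a query clause calls it by its fully qualified symbol, and that symbol is what :allow authorizes at deploy time. The predicate body and the :inv/sku attribute below are illustrative stand-ins, not the exact ion-starter code:

```clojure
;; Illustrative predicate in the style of ion-starter's valid-sku?
;; (the SKU-#### regex is an assumption, not the repo's exact code).
(ns datomic.ion.starter.attributes)

(defn valid-sku?
  "True when s looks like an inventory SKU, e.g. SKU-1234."
  [s]
  (boolean (re-matches #"SKU-\d+" s)))

;; Because datomic.ion.starter.attributes/valid-sku? is listed under
;; :allow in ion-config.edn, a query can invoke it by fully qualified
;; symbol after deploy:
;;
;; '[:find ?sku
;;   :where [_ :inv/sku ?sku]
;;          [(datomic.ion.starter.attributes/valid-sku? ?sku)]]
```

As Daniel notes below, calling a function that is not listed under :allow fails at use time with an error naming the symbol, which is how you discover what to add.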
#2021-08-0314:35Daniel JompheAlso, note that the first time you use an unallowed function in a query or transaction function, you'll see an error appear, telling you that you should allow it. That's how you'll know you stepped out of the sandbox and must take action.
For me, the first time it happened was when I used a function in the clojure.string namespace!#2021-08-0316:01zendevil.ethI’m trying to install datomic peer with leiningen:
[com.datomic/datomic-pro "1.0.6316"]
in :dependencies. But:
Could not find artifact com.datomic:datomic-pro:jar:1.0.6316 in central ()
Could not find artifact com.datomic:datomic-pro:jar:1.0.6316 in clojars ()
#2021-08-0316:04favilaIt’s not in maven central but in a credentialed repository. See http://my.datomic.com for instructions for various build tools (including lein)#2021-08-0316:15zendevil.eth{#"my\.datomic\.com" {:username "
and
gpg --default-recipient-self -e \
~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
gives:
prikshetsharma@Prikshets-MacBook-Pro#2021-08-0316:18favilagpg is looking for a device to read your passphrase from and failing. (BTW, you shouldn’t have pasted your password above)
#2021-08-0317:19zendevil.ethI don’t understand#2021-08-0317:27zendevil.ethDoing export GPG_TTY=$(tty) and then running lein repl gives:#2021-08-0317:27zendevil.ethPlease enter the passphrase to unlock the OpenPGP secret key:#2021-08-0317:27zendevil.ethWhere do I find this password?#2021-08-0317:29zendevil.ethand also how would this work in the context on running it in a dockerfile?#2021-08-0317:30zendevil.ethbecause you can’t type in a password after doing lein uberjar in a dockerfile?#2021-08-0319:33RyanIs there a way to dynamically compose d/q where clauses? I keep running into issues assembling the map because of the pesky ?e's floating around.#2021-08-0319:35favilaThe map form is easier to construct {:find [?a ?b ?c] :where [[?a :foo ?b][?b :bar ?c]]} not [:find ?a ?b ?c :where [?a :foo ?b][?b :bar ?c]]#2021-08-0319:36favila(cond-> [] condition (conj clause clause2) …) is handy#2021-08-0319:36favilaother than that it’s Just Data, what specific pain are you hitting?#2021-08-0319:40RyanThats actually what I just stumbled on#2021-08-0319:40RyanI was trying to do the conj like, inside the form instead of doing it to construct the form#2021-08-0320:12dogenpunkHas anyone run into an issue where they can connect to a cloud system, but running import-cloud throws a :not-found anomaly stating that the configured endpoint “nodename nor servname provided, or not known”?#2021-08-0320:13dogenpunkThe fact that I’m running an older version of datomic cloud may likely have an effect here, but upgrading to the latest dev-local doesn’t resolve the issue.#2021-08-0320:14dogenpunkI have the SOCKS proxy up and running (I’m able to connect using regular client)#2021-08-0320:16dogenpunkThe ExceptionInfo contains a :config key with a map with a :server-type :cloud instead of the :server-type :ion that I specify when calling import-cloud#2021-08-0401:55dogenpunkOk. Found this answer on http://ask.datomic.com. 
I guess the required :proxy-port option got (rightly) dropped from the documentation.#2021-08-0321:51Drew VerleeDatomic cloud question: my websocket connect call will correctly connect through my aws lambda proxy to a handler and back. But not my Http none proxy API gateway. The request reaches the app handler, no errors are thrown, but i get a 500 response code. I'm going to try to get more visibility on what's going on at the apigateway layer. ideas appreciated.#2021-08-0322:06Joe LaneWhat is:
> But not my Http none proxy.#2021-08-0322:10jarrodctaylorTurn on logs/tracing in your deployed stage in API Gateway and add logging to your functions that are being called. That allows a good deal of visibility as to where and what the error is.#2021-08-0322:39Drew Verlee@U0CJ19XAM sorry, not sure how that happened. The http proxy API gateway*#2021-08-0322:40Drew Verlee@U0508JRJC yep. That's where I think I should go next, thanks for the suggestion.#2021-08-0413:54Drew Verleeit's clear i need to do more configuration, but it's also becoming more clear that this level of configuration (request and response integration/templating) at the aws level isn't ideal. I feel like the proxy should be the way to go.
But the websocket can make a connection now, it was clear once i could see the logs what the issues were.#2021-08-0411:40zendevil.ethwhat is the classpath of datomic.api peer library in the datomic full distribution? I’m running using clj a file that requires datomic.api inside the datomic distribution but it isn’t picking up datomic.api, while bin/repl does pick up datomic.api#2021-08-0415:36favilarun bin/classpath It’s basically lib/* and the two datomic-pro*.jars in the root#2021-08-0415:46zendevil.eththis is my deps.edn#2021-08-0415:46zendevil.eth{:paths ["./"
         "lib/*"
         "target/classes"
         "build/src"
         "bin"
         "src/clj"
         "test/src"
         "samples/clj"]}#2021-08-0415:47zendevil.ethand when I run clj create_db.clj in the same directory as the deps.edn which is the root of the datomic distribution, it says:
Could not locate datomic/api__init.class, datomic/api.clj or datomic/api.cljc on classpath.
#2021-08-0415:54favilaThe jvm doesn’t include a jar on the classpath unless explicitly referenced or you use “*”#2021-08-0415:54favila"./" is not actually including the datomic jars#2021-08-0415:55favila$ bin/classpath
resources:datomic-transactor-pro-1.0.6269.jar:lib/*:samples/clj:bin:
#2021-08-0415:56favilaI take back what I said, I thought this included the datomic-pro jar#2021-08-0415:56favilayou need datomic-pro-1.0.<whatever>.jar and every jar in lib, and whatever storage driver#2021-08-0415:57favilaand these are probably not deps.edn paths. Have you considered just invoking with an explicit classpath?#2021-08-0415:57zendevil.ethhow do you do that?#2021-08-0415:59favilaclojure -Scp ...#2021-08-0416:00favilaor just java -cp … clojure.main the-script.clj to avoid clj completely#2021-08-0416:03zendevil.ethactually using “*” in deps works splendidly#2021-08-0420:08uwoQuick Q: If you retract a datom with a noHistory attribute, will that data eventually be tossed? (I understand that noHistory is not intended to make semantic guarantees.)
Put another way, if an attribute is not "high-churn", but it is eventually retracted, will noHistory eventually clean that up?
(Tangentially, also I understand that Datomic is not the place to put operational data.)#2021-08-0512:34favilaI don’t think already-indexed datoms are ever removed except by excision#2021-08-0512:39favilaI don’t have inside knowledge here, but I suspect what noHistory means is literally “when updating the index with unindexed datoms from the tx log, don’t write any of these datoms to the history index, just the current (assertion-only) index.”#2021-08-0519:21uwoThanks Favila!#2021-08-0519:50uwoso to be explicit, the guess here is that a retraction would not be enough to trigger the behavior of noHistory?#2021-08-0519:55favilaI wouldn’t count on a retraction retroactively rewriting (i.e. removing entry from) history indexes for an attribute. Maybe it does, maybe it only updates those portions of the index that are “dirty”, maybe it ignores the history index in code and doesn’t rewrite old indexes, donno.#2021-08-0519:56uwoGotcha. Thanks again, Favila.#2021-08-0519:56favilaI will say, if an attribute has always been nohistory, then obviously no history index is going to contain those datoms. I’m thinking mostly of cases where you might be turning nohistory on and off over time.#2021-08-0519:57favilaIn that case, any history you see via d/datoms (or whatever query) for that attribute is coming from the memlog, not from the indexes.#2021-08-0519:57favilaand it will “disappear” once there’s a reindex#2021-08-0520:04uwoso are we hedging toward noHistory+retraction eventually reclaiming space then?#2021-08-0520:08favilaIn the case of “always been noHistory”? Not really. It’s not reclaiming space, it’s just not writing history (i.e. the space was never used).#2021-08-0520:10uwoThis is coming up because my team is considering writing high-volume metadata that is accrete-only, i.e. the individual values are never updated. Their hope was that if the values are eventually retracted then it would be less of a capacity planning issue. 
(They are aware that datomic is not a good fit for operational data, but there is time-pressure.)#2021-08-0520:19uwoI guess my question there is, is noHistory+retraction really doing anything for us, since these values are only written once. My suspicion is no, but I thought I'd ask.#2021-08-0521:01favilaNo, that would help in this way: when a value is retracted, the total datom count of the database will eventually (after the next index) reduce by one instead of increase by one.#2021-08-0521:05favilaI’m wondering if you don’t have a mental model of what’s actually written to indexes and how nohistory influences that. Maybe this will help? https://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2021-08-0521:09favila“noHistory” means “write datoms to the ‘current (now, effective-assertions-only)’ sub-index, not to the history (retractions and old no-longer-in-effect assertions) sub-index”. Normally datoms get written to both, and a complete history is reconstructed by merging them together. Nohistory only writes assertions to the “current” sub-index.#2021-08-0521:10favilaSo when retracting the assertion and retraction datom in effect “disappear” because they never get written to the history index#2021-08-0522:42uwoGroovy. Thanks a ton Favila#2021-08-0522:52ghadinoHistory != update-in-place#2021-08-0603:28uwo@U050ECB92 absolutely. I don't think we were thinking of it that way.#2021-08-0423:29naomarikThere a way to use pull syntax on an entity if it's a ref many within a hetero tuple? Instead of returning something like this:
:space/included-addons [[17592186049388 0 1]] the entity would be resolved out to the fields I'd like to select.
Result of this {:space/included-addons [*]}
1. Unhandled datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:db.error/invalid-lookup-ref Invalid list form: [17592186049388 0 1]
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Invalid list form: [17592186049388 0 1]",
:db/error :db.error/invalid-lookup-ref}
#2021-08-0505:00zendevil.ethI did create the db with the transactor running, but now I’m getting this:
./bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d humboi,"datomic:"
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:699).
AMQ219007: Cannot connect to server(s). Tried with all available servers.
Full report at:
/var/folders/96/df02xppj77g7dx698gtmwmrw0000gn/T/clojure-5121092977512187181.edn
Full report:
https://gist.github.com/zendevil/d9f48df00fa243dfcea687f7f1a9d38c#2021-08-0515:39JohnJAre you using Java 16?#2021-08-0515:41JohnJthe version of activemq that on-prem uses doesn't support it#2021-08-0515:48zendevil.ethnot sure:
prikshetsharma@Prikshets-MacBook-Pro#2021-08-0515:52zendevil.ethjava 8 I think#2021-08-0515:52JohnJshould work, is the transactor running?#2021-08-0516:20zendevil.eth:sql: doesn’t work but :mem: does#2021-08-0516:36zendevil.ethI was running the transactor and the peer on the same port. Now it works#2021-08-0513:53woxI’m wondering what is happening behind the scenes with this query throwing an Execution error (NullPointerException) at datomic.datalog/project (datalog.clj:702).:
(d/q '[:find ?b
:in $ ?range-start ?range-end
:where
[?b :entity/type :entity.type/booking]
(or-join [?b ?range-start ?range-end ?foobar]
(and [?b :booking/assignee ?me]
[?b :booking/actual-start-time ?booking-start]
[(>= ?booking-start ?range-start)]
[(< ?booking-start ?range-end)]))]
(db)
#inst"2020-01-01"
#inst"2022-01-01")
There are a couple of obvious bugs: ?foobar is not bound to anything and neither is ?me. But the weird thing is that having the first two clauses inside the and swapped does not crash. Does anyone know what’s different when they are the other way around?#2021-08-0513:53woxi.e. this works
(d/q '[:find ?b
:in $ ?range-start ?range-end
:where
[?b :entity/type :entity.type/booking]
(or-join [?b ?range-start ?range-end ?foobar]
(and [?b :booking/actual-start-time ?booking-start]
[?b :booking/assignee ?me]
[(>= ?booking-start ?range-start)]
[(< ?booking-start ?range-end)]))]
(db)
#inst"2020-01-01"
#inst"2022-01-01")
=> #{[17592186051383]}#2021-08-0513:55woxand so does having the assignee a second time in the end
(d/q '[:find ?b
:in $ ?range-start ?range-end
:where
[?b :entity/type :entity.type/booking]
(or-join [?b ?range-start ?range-end ?foobar]
(and [?b :booking/assignee ?me]
[?b :booking/actual-start-time ?booking-start]
[(>= ?booking-start ?range-start)]
[(< ?booking-start ?range-end)]
[?b :booking/assignee ?me]))]
(db)
#inst"2020-01-01"
#inst"2022-01-01")#2021-08-0513:56woxand when ?foobar is removed either way works#2021-08-0518:39souenzzo@U7PQLLK0S I'm not from datomic team but it feels like a bug.
can you share which datomic version you are using, which kind of connection (mem/file/sql...), maybe the JVM version/release too.#2021-08-0607:48woxthis occurred with 1.0.6269 both with sql connection (Linux, JVM 11.0.12) and file connection (macOS, JVM 11.0.10)#2021-08-0607:49woxI tried this now locally with 1.0.6316 as well and the behavior is the same#2021-08-0619:43jaretHi All! @audiolabs made some new Datomic Cloud setup/getting started videos and we'd appreciate retweets for visibility! If you haven't had a chance to try out Cloud maybe this is your sign. Cheers and happy friday!#2021-08-0619:43jaretNew Datomic Cloud setup and getting started videos:
• Setup - https://twitter.com/datomic_team/status/1423727546475102215?s=20
• Getting Started - https://twitter.com/datomic_team/status/1423727955537276937?s=20
#2021-08-0621:01bedersThat’s great!
What I’m currently looking for is Datomic product information that is suitable for a non-technical Product Management team. Datomic’s website mostly is pretty technical and feature highlights quickly go into technical details.
Can you recommend a resource that answers the question: Why would I adopt Datomic for my business?#2021-08-0721:32Drew VerleeCan you share what your business is? It might help focus the answer.#2021-08-0805:40bedersWe are a financial services company and are weaning our users off SFDC.
Datomic would be a great fit for our needs.
What would be helpful is good ole marketing material for a non-technical audience#2021-08-0912:58jaret@U628K7XGQ I think this is an area we could do better story telling, but our customers have done some of this in the past. Particularly @U06GS6P1N a few years ago with this post: https://medium.com/@val.vvalval/what-datomic-brings-to-businesses-e2238a568e1c#2021-08-0916:38bedersthank you!#2021-08-0908:56weiI have the following schema:
{:db/ident :token/uuid
:db/cardinality :db.cardinality/one
:db/valueType :db.type/uuid
:db/unique :db.unique/identity}
{:db/ident :token/ordinal
:db/cardinality :db.cardinality/one
:db/valueType :db.type/long}
{:db/ident :token/project
:db/cardinality :db.cardinality/one
:db/valueType :db.type/ref}
{:db/ident :token/project+id
:db/tupleAttrs [:token/project :token/ordinal]
:db/valueType :db.type/tuple
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
:token/ordinal is optional, and I'd like for the composite check :token/project+id to run only when :token/ordinal is not nil.
Basically, I'd like to make sure each token ordinal is unique per project, as long as it's set. Is that possible?#2021-08-0910:23Jakub Holý (HolyJak)I know little about Datomic but I guess https://docs.datomic.com/on-prem/schema/schema.html#entity-predicates should allow you to do that, if there is nothing better#2021-08-0910:26weithanks, that's very helpful!
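For reference, a sketch of how that entity-predicate approach might look. All names here (:token/validate, myapp.preds, the predicate function) are hypothetical, and this assumes the on-prem peer API:

```clojure
;; 1. Entity spec, transacted as schema; it only fires on entities that
;;    assert :db/ensure :token/validate in their transaction.
{:db/ident :token/validate
 :db.entity/preds 'myapp.preds/ordinal-unique-in-project?}

;; 2. The predicate itself (must be on the transactor's classpath).
(ns myapp.preds
  (:require [datomic.api :as d]))

(defn ordinal-unique-in-project?
  [db eid]
  (let [{ordinal :token/ordinal
         project :token/project}
        (d/pull db [:token/ordinal {:token/project [:db/id]}] eid)]
    (or (nil? ordinal)                      ; no ordinal => nothing to enforce
        ;; every entity carrying this project+ordinal must be this entity
        (every? #(= % eid)
                (d/q '[:find [?e ...]
                       :in $ ?p ?o
                       :where
                       [?e :token/project ?p]
                       [?e :token/ordinal ?o]]
                     db (:db/id project) ordinal)))))
```

New or changed tokens would then be transacted with :db/ensure :token/validate asserted on them, which is what triggers the predicate.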
#2021-08-0910:26Jakub Holý (HolyJak)Question: In RDBMS, a cascading delete can have multiple levels: if table A depends on B that depends on C and I delete a row in C then the related rows in both B and A will be deleted. This can be modelled in Datomic with C being isComponent . But in RDBMS I can also only delete a row in B, which would then delete all its related rows in A. I suppose that is not possible in Datomic since, I guess, a subcomponent (B being a subc. of C) cannot be a component in its own right to its own subcomponents. Correct? Is there any good solution?#2021-08-0911:59favilaDatomic attrs are graph edges, so this example doesn’t map to the table model#2021-08-0912:00favilaIsComponent does two things:#2021-08-0912:01favilaFor given [a :component-attr b], retractEntity a will also retractEntity b. #2021-08-0912:02favilaThis is recursive, so you can retractEntity a chain of entities this way#2021-08-0912:03favilaSecond thing: the reverse-relation in map projections will be cardinality-one.#2021-08-0912:04souenzzo@U0522TWDA
:db/retractEntity is a transaction function
https://docs.datomic.com/cloud/transactions/transaction-functions.html#built-in
You can re-implement :my.db/retractEntity with something like that
(defn my-custom-retract
  [db eid]
  (for [[e a v is-component]
        (d/q '[:find ?e ?a ?v ?is-component
               :in $ ?e
               :where
               [?a :db/ident]
               [?e ?a ?v]
               [(get-else $ ?a :db/isComponent false) ?is-component]]
             db (d/entid eid))
        tx (cons [:db/retract e a v]
                 (when is-component
                   (my-custom-retract db v)))]
    tx))
Once you know this, you can do one of these:
• create your specific retract-my-entity-a, that knows what to do when you retract a
• create a generic :my-custom/is-component attribute and a custom my-custom-retract that implements retractions in the way that you expect.#2021-08-0912:06souenzzoPS: my-custom-retract will be a classpath function
https://docs.datomic.com/cloud/transactions/transaction-functions.html#custom
You can also use "database functions"
But these database functions seem to be deprecated / not recommended.
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/function#2021-08-0912:18Jakub Holý (HolyJak)Thanks a lot, both of you!
> Datomic attrs are graph edges, so this example doesn’t map to the table model
Aren't Foreign Keys in RDBMS also "graph edges"?
> This is recursive, so you can retractEntity a chain of entities this way
That is what I expected, when retracting the top-level entity C (i.e. a Conglomerate lets a company it owns go bankrupt, which removes all its departments and employees). What I was after is the ability to retract a thing deeper in the dependency tree and have all its subdependents also retracted - e.g. we decide to cancel a department and fire all its employees.
So I guess that, as @U2J4FRT2T suggests, I would need to add custom attributes to make this hierarchy of dependants (A -> B -> C) and a custom tx function that would automatically delete all relevant A entities when their parent B is deleted. I suppose I could use the card. one reverse relation to query for all the dependent entities and retract them manually.#2021-08-0913:06favila> Aren’t Foreign Keys in RDBMS also “graph edges”?#2021-08-0913:13favilaI was speaking particularly to this sentence “if table A depends on B that depends on C”. Columns and foreign keys are defined on a table, it makes sense to say “table A depends on B”--it’s a graph edge, but it’s both “type” (table) and “instance” (row) level. But datomic attributes can be asserted on any entity and are stand-alone: they represent only the instance graph edge, not the subject or object entity. So it doesn’t make sense to speak this way, there’s no corresponding “table” in datomic.#2021-08-0913:15favilaYou can make your own relation semantics as @U2J4FRT2T suggests, just like retractEntity does with isComponent. However I’ve found myself avoiding even isComponent because the deletion semantic is too baked-in and the “only one reference” semantic isn’t enforced strongly enough to be useful.#2021-08-0913:17favilaIn sql, “delete” has a well-defined meaning (remove a row), so maybe the foreign key relations (cascading, aborting, etc) make sense.#2021-08-0913:19favilain datomic, there is no “delete” in the same way. Entities are just IDs, you can’t “delete” them. the closest analog is “retract every asserted datom where the entity matches the E or V slot of the datom”, and that’s often not what I want. E.g., we have some attributes#2021-08-0913:20favilacommon to many entity “types” which we don’t want deleted. 
Similarly, we don’t want transaction audit metadata to be retracted when the entity is retracted.
#2021-08-0915:13Tatiana KondratevichHi! Question about env-map and config settings.
Can I store configuration like this in AWS Parameter Store?
{:server-type :ion
:region "<your AWS Region>" ;; e.g. us-east-1
:system "<system name>"
:endpoint "<your endpoint>"}
I want to configure comfortable switching between the dev and prod databases (on AWS only prod, and locally only dev). If I can’t do this with Parameter Store, how can I do this?#2021-08-1114:41hdenIt really depends on how you manage your environments / profiles.
The general goal is to default to production, and override the parameters (in this case, the SSM key) locally.
If you are using duct, there is a library for that purpose.
https://github.com/hden/duct.module.datomic#usage#2021-08-1219:49Joe LaneHi @U028H6X0KRS , we support this scenario. See this blogpost with links to our docs.
https://blog.datomic.com/2018/08/ion-parameters.html
You can supply a different env-map per environment. #2021-08-2011:46Tatiana Kondratevich@U0HLHE6JE @U0CJ19XAM Thanks so much
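One common shape for hden's "default to production, override locally" advice is an explicit config function. A minimal sketch (the APP_ENV variable, system names, and endpoint are hypothetical placeholders):

```clojure
(defn client-config
  "Default to the production :ion config; switch to dev-local when the
   environment is \"dev\" (e.g. on developer machines)."
  [env]
  (if (= "dev" env)
    {:server-type :dev-local
     :system      "dev"}
    {:server-type :ion
     :region      "us-east-1"
     :system      "prod"
     :endpoint    "<your endpoint>"}))

;; typically called as: (client-config (System/getenv "APP_ENV"))
```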
#2021-08-1010:20Jakub Holý (HolyJak)Hi! Has anyone tried to build Gremlin support for Datomic (for Apache TinkerPop v3; I see there was a PoC "blueprint" for an old version)? Thanks#2021-08-1014:50Jakub Holý (HolyJak)Does Datomic (on-prem) support Full-Text Search or does it have any integration with ElasticSearch? Neither scanning the docs nor searching the Internet answered that. Thank you!#2021-08-1014:56favilaOn-prem supports fulltext: https://docs.datomic.com/on-prem/schema/schema.html#operational-schema-attributes#2021-08-1014:58favilaIt’s very limited and you cannot remove it later. If you want flexible and powerful fulltext search over large text corpuses I suggest feeding the datomic transaction log into a secondary elasticsearch service and using that for fulltext searches.#2021-08-1014:58favilaIf you just have tiny text fields and just want some kind of low-headache fuzzy matching, it’s probably fine.#2021-08-1017:11Jakub Holý (HolyJak)Awesome, thanks!#2021-08-1308:35Jakub Holý (HolyJak)@U09R86PA4 perhaps it would be good if https://docs.datomic.com/on-prem/schema/schema-change.html#schema-alteration listed :db/fulltext in the "You can never alter ..." list? Not sure how to propagate that suggestions to the Datomic team...#2021-08-1311:43jaretHi @U0522TWDA it's already there: You can never alter _:db/valueType_, _:db/fulltext_, _:db/tupleAttrs_, _:db/tupleTypes_, or _:db/tupleType_.#2021-08-1311:51Jakub Holý (HolyJak)Ah, thank you, I'm blind today :)#2021-08-1016:25kennyDo the following two dbs exhibit the same performance?
(def db (d/db conn))
(def db-as-of (d/as-of db (:t db)))
Put another way, as (:t db) grows further from the current :t (e.g., current :t is 1 > (:t db), 10 >, ..., 1e6 >, ...), does db pay the same performance penalty an as-of db pays? (current :t = (:t (d/db conn))).#2021-08-1016:32favilaA “current” (unfiltered) db is always fastest, because it only ever has to retrieve the current index. It only merges against that index and the memlog.#2021-08-1016:34favilaPotentially an as-of >= last-indexed-t will do the same and be the same complexity, but I donno.#2021-08-1016:34favilaanything older is going to have to look at the history or mid-history indexes to reconstruct the value at that moment in time.#2021-08-1016:35favilawhether this is slower-enough in practice depends on object cache, valcache, datasize, selectivity of queries, etc#2021-08-1016:35favilabut you’re merging across 3-4 indexes instead of just 2#2021-08-1016:57kennyInteresting, thanks for responding. By "anything older is going to ...", do you mean both an as-of and an unfiltered db would go down this path?#2021-08-1017:23favilaan unfiltered db is by definition a “now” db, so only looks at the “now” index--the one that only has assertions valid at the moment it was indexed.#2021-08-1017:24favilaBy “anything older” I mean anything that needs to look at datoms older than the last-indexed T#2021-08-1118:33prncHi 👋,
question about IONS.
I’m running an “older” version (668-8927) of solo topology.
And I’m seeing an API Gateway integration error: “The response from the Lambda function doesn’t match the format that API Gateway expects. Lambda body contains the wrong type for field ‘headers’”. This only happens on redirect responses as far as I can tell.
This is coming from an ionized handler.
I have v. little experience with ions, lambda etc. so maybe this is a common problem? Thanks!#2021-08-1118:43Joe LaneWhat headers, if any, are you returning on a redirect response? Presumably you are the one redirecting, correct?#2021-08-1119:58Daniel JompheHi! When I used ion lambdas, IIRW, I wrapped my responses thusly:
(defn OK [body-str]
  {:status 200
   :headers {"Content-Type" "application/json"}
   :body body-str})#2021-08-1120:27prncResponse is along the lines of…
{:status 302,
:headers {"Location" "/projects"},
:body "",
:flash {,,,},
:session {:identity ,,,}}
pretty standard I guess. Need to add some logging around this to gain a bit more visibility. Which is actually my other question. In an Ions app is “normal” logging supposed to be replaced/supplemented with e.g. cast/event . https://docs.datomic.com/cloud/ions/ions-monitoring.html#java-logging suggests that this is done somewhat automatically when logging is going through slf4j, without additional configuration, am I understanding that correctly? Cheers!#2021-08-1200:31Daniel JompheI think you're right, but like the page said, only levels above WARN are cast this way.#2021-08-1206:05prncThanks @U0514DPR7!
#2021-08-1120:08Ben HammondIs there an https://github.com/cognitect-labs/aws-api operation
that I can use to exchange a Cognito authorization code for an access token, as in
https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html?#2021-08-1122:07weiwhat's the max size on a transaction? if I need to divide my transaction into smaller batches, is there an easy way to share tempids?#2021-08-1122:46thumbnailWe've hit timeouts using client at some reasonably big transactions (usually during ETL).
About sharing the tempids, no generic advice, maybe you can try to batch the entities so it's less of an issue, or propagate the temp-ids from the previous transaction into the next
#2021-08-1122:13weiactually it turns out I was hitting the default 1000 pull limit. that said, my question stands out of curiosity. would be neat if I could plug in a tempid map returned by a previous transaction and have datomic use it to translate tempids in the current transaction#2021-08-1613:53matthavenerthis is a relatively straightforward function to write. you can even do it very naively with clojure.walk
#2021-08-1219:52Joe LaneIf you declare a unique attribute value pair per entity in your tx-data you don’t need tempids.
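The clojure.walk approach matthavener mentions can be sketched in a few lines (the function name is mine; it assumes the :tempids map returned by a previous d/transact and string tempids in the next batch):

```clojure
(require '[clojure.walk :as walk])

(defn resolve-tempids
  "Replace string tempids in tx-data with the real ids from a previous
   transaction's :tempids map, leaving everything else untouched.
   Caveat: any string in tx-data that collides with a tempid key
   will also be replaced."
  [tempids tx-data]
  (walk/postwalk (fn [x] (get tempids x x)) tx-data))

;; e.g. with {"alice" 17592186045418} from a prior transact:
;; (resolve-tempids {"alice" 17592186045418}
;;                  [[:db/add "alice" :user/email "a@example.com"]])
;; => [[:db/add 17592186045418 :user/email "a@example.com"]]
```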
#2021-08-1302:23nwjsmithIn HTTP direct Ions, how do you get access to the query string for a request? Is it appended to the value of the :uri request map entry?#2021-08-1313:55Daniel JompheI get it exactly like the standard Ring adapter expects it, and it exposes it as :query-string for me.#2021-08-1314:31nwjsmithThanks for looking into it, the docs aren’t complete (https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion) and I haven’t spun up my Cloud env yet. I’ll pop into the forum to see if I can get this added to the docs 👏 #2021-08-1314:34Daniel JompheRe-reading the doc you provided, I must temper a bit my answer:
I get it mostly like the standard Ring adapter expects it, and the adapter exposes it as `:query-string` for me.#2021-08-1310:35prncI’ve updated solo to 884-9095, still a t3.small instance though; what is the recommended way to update the storage size for this instance (default 8GB)?
Is this something that I could do in the CloudFormation template, so it’s always brought up with the new default size by the ASG? Or create a modified Launch Configuration with an attached volume? Please advise how to go about this :)
Context: I’m seeing No space left on device - /opt/codedeploy-agent/deployment-root/9d00fd66-e380-4208-af18-088da384cd80/d-TX1A7WCBC/bundle.tar types of errors during deployments DownloadBundle step.
Thanks!#2021-08-1318:45Jakub Holý (HolyJak)Hello! I am trying to run clj inside a local clone of https://github.com/Datomic/day-of-datomic/ after having added the cognitect-dev-tools repo to my global deps.edn and ~/.m2/settings.xml but it fails with
...
Downloading: com/datomic/client-impl-shared/0.8.86/client-impl-shared-0.8.86.jar from central
Downloading: com/datomic/dev-local/0.9.232/dev-local-0.9.232.jar from cognitect-dev-tools
Downloading: org/apache/httpcomponents/httpclient/4.5.2/httpclient-4.5.2.jar from datomic-cloud
Error building classpath. Could not transfer artifact org.apache.httpcomponents:httpclient:jar:4.5.2 from/to central (): status code: 416, reason phrase: Range Not Satisfiable (416)
Why? I see the version exists in maven central. Though from the prev. line it seems it is downloading from "datomic-cloud" (the <s3://datomic-releases-1fc2183a/...>)?
#2021-08-1318:53Jakub Holý (HolyJak)Fixed by rm -rf ~/.m2/repository/org/apache/httpcomponents/httpclient/4.5.2 as suggested by https://stackoverflow.com/a/15462387#2021-08-1318:47Alex Miller (Clojure team)What version of the Clojure CLI are you using? (`clj --version` or clj -Sdescribe if older)#2021-08-1318:54Jakub Holý (HolyJak)1.10.3.933#2021-08-1319:20Alex Miller (Clojure team)ah, could have been a bad download I guess#2021-08-1319:20Alex Miller (Clojure team)or bad version metadata file
#2021-08-1512:54weiare there any plans to support lazy entity crawling in datomic cloud? I miss that super rad feature of datomic on-prem#2021-08-1518:43kennyUnlikely. The entity API is a bad fit for Cloud because it is lazy. See https://docs.datomic.com/on-prem/overview/clients-and-peers.html#peer-only
#2021-08-1611:11Jakub Holý (HolyJak)Not sure where to report, there is a typo at https://docs.datomic.com/on-prem/operation/capacity.html#multiple-databases : "database, or or colocating multiple databases" and "For sh, is is not uncommon"#2021-08-1611:16jaretThanks for catching this. I’ll fix it today.
#2021-08-1611:49Jakub Holý (HolyJak)A question about multiple DBs and transactors. https://docs.datomic.com/on-prem/operation/capacity.html#multiple-databases:
> When you serve multiple databases with Datomic, you have a choice of provisioning separate transactor pairs per database, or or colocating multiple databases on a single transactor pair.
But I cannot find where to configure that. The only https://docs.datomic.com/on-prem/configuration/system-properties.html#transactor-properties or that can be https://docs.datomic.com/on-prem/overview/storage.html#cassandra is related to the storage cluster, nothing about individual DBs. I know I can start multiple peers / peer group and point each to a particular subset of DBs but I do not see how to run a number of transactors, each serving only a subset of DBs, and how to get the peers to connect to the correct transactor?#2021-08-1611:59favilaTransactor properties do not include a database#2021-08-1612:00Jakub Holý (HolyJak)Yes, it seems so. So how should I understand this sentence: "you have a choice of provisioning separate transactor pairs per database"? 🙏#2021-08-1612:01favilaMake more transactors with their own storage#2021-08-1612:02favilaA connection string has storage + datomic-db info#2021-08-1612:02favilaYou can have one transactor pair per storage but multiple dbs#2021-08-1612:02Jakub Holý (HolyJak)I see, thank you. So in the case of Cassandra, I would need one Cassandra cluster per a set of DBs. That makes sense.#2021-08-1612:03Jakub Holý (HolyJak)The Datomic on-prem license is "per system". Do you know what a "system" is? Is it essentially per one HA transactor pair?#2021-08-1612:04favilaNo you could use the same Cassandra cluster but a different key space
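To illustrate "a connection string has storage + datomic-db info": with SQL storage, two databases colocated on the same storage (and thus the same transactor pair) differ only in the database-name segment of the URI. A sketch with hypothetical hosts and credentials:

```clojure
;; Same JDBC storage behind both URIs, so both databases are served by the
;; one transactor pair attached to that storage; only the db name differs.
(def db-one-uri
  "datomic:sql://db-one?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")

(def db-two-uri
  "datomic:sql://db-two?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic")
```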
#2021-08-1612:11favilaRe:system, I think so.
#2021-08-1612:37Jakub Holý (HolyJak)Thanks a lot!#2021-08-1612:47prncHope you don’t mind that I try again.
I’m seeing No space left on device - /opt/codedeploy-agent/deployment-root/.../bundle.tar types of errors during the deployment's DownloadBundle step. Seems like the CodeDeploy agent is running out of space for the revisions archive.
Is there a way to change the root volume size that’s attached to the compute instance (running 884-9095, on t3.small w/ the default 8GB storage)?#2021-08-1613:38Jakub Holý (HolyJak)Any idea why I might be getting "Could not transfer artifact com.datomic:datomic-pro:pom:1.0.6316 from/to http://my.datomic.com (https://my.datomic.com/repo): status code: 401, reason phrase: Unauthorized (401)" even though I have added the <server><id>...</id> entry to ~/.m2/settings.xml as suggested by https://my.datomic.com/account ? 🙏 Could it be because my account is too new?
#2021-08-1613:47Joe LaneAre you using Lein?#2021-08-1613:53Jakub Holý (HolyJak)no, deps.edn / clj#2021-08-1613:53Joe LaneDoes it work now that you have ✅ ?#2021-08-1613:53Jakub Holý (HolyJak)Found it, I misplaced the <server> section 😅
#2021-08-1613:49Linus EricssonIs there any open source code for the functionality of (selective) respooling datomic databases?#2021-08-1613:50favilaWhat is “respooling”?#2021-08-1613:51Linus Ericssonselectively lift data from an old db to a new one (as mentioned in some talks from Nubank).#2021-08-1613:51favilain a history-preserving way? or just copying?#2021-08-1613:53Linus Ericssonhistory preserving but possibly rewriting/never transact certain things (GDPR use cases among other things).#2021-08-1613:53favilaThe history-preserving method is usually called “decanting”. I’m not aware of any open source code for this and having written a few I’m not sure what it would contain--it tends to be very bespoke and depend on the data model#2021-08-1613:55Linus Ericssondecanting is the word. Thanks for confirming that this tends to be use-case specific. This means we don't waste our time thinking about it with the ambition to build something from scratch.#2021-08-1613:55favilathe basic technique is read the tx log in order, transact that into the target db, and retain somewhere a mapping of eids in the old db to the new id as you go.#2021-08-1613:56Jakub Holý (HolyJak)I guess https://github.com/fulcrologic/datomic-cloud-backup/ could be a good starting point? It also supports filtering/rewriting data so that prod data can be used in a stage env in a privacy-friendly way
#2021-08-1613:59Linus EricssonThanks for the pointers and hints. We will look into this for our particular use-case. The devil is in the details, as always.#2021-08-1620:20Jakub Holý (HolyJak)Hi! If I understand it right, you can enforce integrity constraints on data but must remember to include the relevant :db/ensure on any transaction that creates/modifies the entity in question. How does that work for you in practice? Is it a non-problem or does it happen that a developer forgets to include it and makes a change that leads to invalid data (such as creating an employee without a (required) department)? Thank you!#2021-08-1720:14Daniel JompheHi! With the new Datomic Cloud, if I set a compute group's auto scaling configuration with desired capacity to 1 and minimum number of instances during update to 2, should it spin up a 2nd instance when the update starts before applying the new app code, in order to enable high availability during deployments, and scale back to the desired capacity after the deployment succeeds?#2021-08-1720:15Daniel JompheHere is how I set it up.#2021-08-1720:15Daniel JompheI tested it and the system/ASG didn't scale up a 2nd instance during the deployment, so it caused downtime. Are my expectations wrong about setting it up like I did?#2021-08-1912:53Daniel Jomphehttps://ask.datomic.com/index.php/650/highly-available-deployments-when-desired-capacity-is-1#2021-08-1913:25jaretThat "min during update" is for Cloud Formation updates (e.g. updating to a new Datomic release), not Code Deploy. Code Deploy deploys to running instances. If you want HA during a Code Deploy you would need to have the desired instances set to 2.#2021-08-1913:27Daniel JompheOh, and it was written in the descriptive label just like you say...! Thanks! 😲#2021-08-1913:31Daniel JompheI'm quite impressed to see that it's possible to perform HA compute group updates. 🙂
And @U1QJACBUM then in case I need this someday on a compute group with Desired Capacity set to 1...
In that case, if min during update is 1, will the CloudFormation update scale up to 2 instances temporarily to ensure at least 1 instance is always running during the update?
And what if min during update is 2 but desired capacity is 1?#2021-08-1916:03jaretIf min during update is 1, yes CF should scale up to 2 during an upgrade and then kill the previous instance as long as the maximum allows for that. For the second question, if it is 2 then 2 instances will run, but again you need headroom from the max.#2021-08-1916:04Daniel JompheThanks for the confirmation!#2021-08-1913:27Jakub Holý (HolyJak)What is a good way to refer to a past state of the DB that would survive backup+restore or something like https://github.com/fulcrologic/datomic-cloud-backup ? I suppose the transaction ID changes when replaying the transactions in a new DB, but perhaps (:t (d/db conn)) remains the same, if the transactions are replayed in the same order and none is missed?
#2021-08-1913:33favilamake your own transaction uuids#2021-08-1913:34favilaThat is the only way#2021-08-1913:34favila> perhaps `(:t (d/db conn))` remains the same, if the transactions are replayed in the same order and none is missed?#2021-08-1913:35favilano, because the T value is used for all minted entities. If each transaction is exactly the same and issued in the same order, and you don’t hit edge cases like schema upgrades and such, maybe they will be the same, but I wouldn’t count on it#2021-08-1913:39Jakub Holý (HolyJak)thank you!#2021-08-1915:58Jakub Holý (HolyJak)How do :db.type/ref work? I guess there are no built-in checks and I can set it to something that looks like an entity ID, even if no such entity exists (or it existed but has been retracted). Correct?#2021-08-1916:01favilaThere is I think a trivial check that the T value of the entity-id does not exceed the T value of the database and the partition bits of the entity-id correspond to a partition that actually exists. However there’s no solid notion of an entity “existing” or not--it’s just a number.#2021-08-1916:02favilaI think the easiest way to enforce “the value of this ref-attr attribute is an entity obeying some expectation/interface/contract” is :db/ensure and entity specs.#2021-08-1918:32Jakub Holý (HolyJak)I thought so. Thank you for confirmation!#2021-08-2009:08Jakub Holý (HolyJak)Is there a more efficient way to get the base time of the DB after a transaction whose ID I have than (:t (d/as-of (d/db conn) 13194139533321)) ? Or is this operation cheap and I don't need to worry about it? 🙏
(I am pondering how to implement optimistic locking of a whole data entity, where the attribute-level :db/cas is not sufficient. My idea was to
1) store a ref to the transaction upon any transact changing the entity via .. [:db/add <entity-id> :some-entity/last-tx "datomic.tx"] ..
2) when reading an entity from the DB (e.g. (d/pull db ['*] <id>) ), I would also add the base time (which is readily available) to it: (assoc entity :baseT (:t db))
3) when the client submits a change, I need to check whether the entity has changed since the :baseT the client has.
Perhaps there is a better way to achieve this than that?)#2021-08-2010:17Linus EricssonIsn't this what datomic.api/tx->t is for?#2021-08-2010:20Linus EricssonRegarding locking the whole entity, i would suggest a transaction function like [:entity-not-changed-since entity-id tx-id] that throws an exception if the entity in the db given to the transaction function is changed after tx-id. You will not be happy having to keep track of changed-since-data.
Well, if you want to store that a transaction tried to change the entity without really changing it (like setting a single cardinality value to its current value), you need to keep track of it, for instance via "meta data" in the transaction entity.#2021-08-2013:29favilaConverting between a T and a TX is simple bit masking/adding. Use t->tx or tx->t to do it#2021-08-2013:32favilaI second a “check and abort” txfn as a way to approach this. Consider parameterizing it by attributes to check because “lock the whole entity” kinda goes against the semantics of entities—they are not rows in a table with fixed columns, and can support overlapping attr sets from unrelated applications. Consider also making it check by value instead of time: supply a pull expr and an expected value and revalidate that the value is equal or abort
#2021-08-2013:37favilaIf you still want a time check, you can implement that efficiently-ish with (d/datoms db :eavt e) where db is filtered by (d/since (d/history db) read-tx). If you get any datoms then something happened to the entity. This impl also removes the need to convert t/tx—since accepts both or even insts
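The filtered-history check described above could be wired up roughly like this (a sketch only; `assert-unchanged-since` and its argument shape are made up, and d/datoms is shown with the Client API's arg-map signature since the thread is about the client library):

```clojure
;; Sketch: abort a transaction if entity `e` has any history after
;; basis `read-t`, using d/since over d/history as suggested above.
;; The function name and wiring are hypothetical.
(require '[datomic.client.api :as d])

(defn assert-unchanged-since
  "Transaction-function candidate: throws if any datom (assertion or
   retraction) touched entity e after read-t; otherwise contributes
   no extra tx-data."
  [db e read-t]
  (let [changed-db (d/since (d/history db) read-t)]
    (when (seq (d/datoms changed-db {:index :eavt :components [e]}))
      (throw (ex-info "Entity changed since read"
                      {:e e :read-t read-t})))
    []))
```

Run inside a transaction function, the check and the accompanying writes are atomic; called outside one, it would only be advisory.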
#2021-08-2013:37Jakub Holý (HolyJak)Thanks a lot for your ideas!
I see why I did not find tx->t, it is not in datomic.client.api. Does it mean it only works on Peers?#2021-08-2013:38favilaAh didn’t realize this was client#2021-08-2013:38favilaClient doesn’t have it, but you can reimplement by masking out the 20 bits above the bottom 42 bits
#2021-08-2013:39favilaThat’s tx->t#2021-08-2013:39Jakub Holý (HolyJak)I did not say whether client or peer so you could not notice 🙂#2021-08-2013:40Jakub Holý (HolyJak)I do not really care about "time", just about "version". Fully agree with preferring not to lock, I want to use :db/cas wherever possible. But it is possible that in some places that is not enough#2021-08-2013:43Jakub Holý (HolyJak)@UQY3M3F6D What approach did you have in mind for implementing the "the entity in the db given to the transaction function is changed after tx-id"? The same as @U09R86PA4 proposes above with history, since, datoms?#2021-08-2013:43Linus Ericssonyes, that's about what I had in mind
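The bit arithmetic mentioned above can be sketched in plain Clojure for the Client API, which lacks these helpers (a sketch: the names mirror the peer API's tx->t/t->tx, and the :db.part/tx partition value of 3 is inferred from the example tx id 13194139533321 earlier in the thread):

```clojure
;; Sketch of tx->t / t->tx for the Client API. Assumes, per the
;; discussion above, that t lives in the low 42 bits of a tx entity id
;; and the :db.part/tx partition bits (partition 3) sit above them.
(def ^:private t-mask (dec (bit-shift-left 1 42))) ; low 42 bits set

(defn tx->t
  "Recover the basis t from a transaction entity id."
  [tx]
  (bit-and tx t-mask))

(defn t->tx
  "Rebuild a tx entity id from a t by restoring the :db.part/tx
   partition bits (partition 3 shifted above the 42 t bits)."
  [t]
  (bit-or t (bit-shift-left 3 42)))

;; (tx->t 13194139533321) => 9
;; (tx->t (t->tx 1000))   => 1000
```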
#2021-08-2114:40hdenThe canonical way is to implement your transactional logic in an ion and use it in a (d/transact ...) call.
I suppose that by
> it is possible that some places that is not enough
you meant a remote transaction.
Maybe you can create an optimistic lock using an attribute?
for example
{:db/ident :system/revision
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
when reading for a transaction, always read the :system/revision attribute along with the other attributes.
When committing the changes, always transact with a [:db/cas 42 :system/revision revision (inc revision)] operation.
When multiple clients concurrently perform the read-and-commit operation, only the first one will succeed and the others must prepare for a retry.#2021-08-2115:52Jakub Holý (HolyJak)Thank you. By "not possible at all places" I meant that in some cases the business logic may disallow partial changes to the entity and require that the whole entity has not changed since the user read it. "might" because this is just an assumption, I do not know the code base and business rules well yet.
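The commit step of the revision-based lock described above might look like this (a sketch; `commit-with-revision!` is a hypothetical helper around the :system/revision attribute from the example, using the Client API):

```clojure
;; Sketch of the optimistic-lock commit: guard the write with :db/cas
;; on :system/revision. The attribute comes from the example above;
;; the helper name is invented.
(require '[datomic.client.api :as d])

(defn commit-with-revision!
  "Transacts tx-data plus a cas bumping :system/revision from the
   value read together with the entity. The transaction aborts
   (cas failure) if another client committed first; the caller
   should then re-read and retry."
  [conn eid revision tx-data]
  (d/transact conn
              {:tx-data (conj (vec tx-data)
                              [:db/cas eid :system/revision
                               revision (inc revision)])}))
```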
#2021-08-2009:45Jakub Holý (HolyJak)Q2: I want to allow users to explore what-if scenarios. I can trivially do that with Datomic using d/as-of (for the starting point) and applying a list of the changes they make via d/with. But this is only in memory; I would like to be able to persist these scenarios until they are no longer needed. What is a good way to do that? Store them outside of Datomic? Or store them as strings (e.g. having [{:db/ident :whatif/start-point, :db/valueType :db.type/ref, :db/doc "ref the transaction when we branch off", ...}, {:db/ident :whatif/changes, :db/valueType :db.type/string, :db/cardinality :db.cardinality/many}])? 🙏#2021-08-2010:30Linus EricssonOne way (which I think requires some carefulness in the data modelling) is outlined by Tim Ewald here: https://docs.datomic.com/on-prem/learning/videos.html#reified-transactions
This talk is mind boggling, IMHO. The basic idea is to make queries aware of which transactions they actually use in the queries. I haven't fully grasped how things would work if several sagas (multi-transaction collections) work with the same unique entities. Probably not very well.
Another way is to store the d/with transaction results as edn-data in transactions (or anywhere, really). These need to refer to either a realized database version or another stored transaction result. If these what-ifs are limited in size it should be quite quick to calculate the databases (very much event sourcing).
#2021-08-2013:48Jakub Holý (HolyJak)Thanks a lot, will check the talk. I hope I will not end up 🤯 🙂
> store the d/with transaction results as edn-data in transactions
so, essentially, as a string, right?#2021-08-2013:54Joe Lane@U0522TWDA How long do you want to persist these scenarios? Are these scenarios the dominating entity in your data-model? Are the changes/transactions against the speculative scenarios (alternate-timelines) generated by humans or machines?#2021-08-2013:55Jakub Holý (HolyJak)Few days to weeks, I guess. No, not dominating. Generated by humans - analysts thinking about the future and modelling different future scenarios for discussion.#2021-08-2013:59Joe LaneCan you model this entity in a more first-class way rather than speculation on d/with dbs? Seems like this might be a better fit than keeping track of the datoms added to a speculative db as strings, etc.#2021-08-2014:02Jakub Holý (HolyJak)What this is all about is people starting from a graph and then making various modification to the graph, then comparing it to the original one or other such scenarios. To make it a 1st class, I would perhaps need to copy all the graph entities to new ones (and link them to their originals) - then I could work with these freely. And this is what we do now with Mongo. I just thought that leveraging d/as-of and a list of the transactions=changes would be a very simple, low-effort way to reimplement it on top of Datomic.#2021-08-2014:30Joe LaneHave you considered modeling each version of the graph as a new entity with an adjacency list pointing to nodes as a card-many ref? That way, you can "copy" every existing node relationship (edge) from a prior version of the graph, determine which edges need to point to new nodes, and then transact the new graph entity with a new version and adjacency list (and possibly the new nodes) all in a single transaction?{:tag :div, :attrs {:class "message-reaction", :title "thinking_face"}, :content ({:tag :span, :attrs {:class "emoji"}, :content nil} " 2")}
#2021-08-2014:31Joe Lane(Not sure if you need a transaction function, depends on the semantics your biz requirements have)#2021-08-2114:52hden> Generated by humans - analysts thinking about the future and modeling different future scenarios for discussion.
AFAIK, d/as-of is designed to be a snapshot of the database. It was never intended to be used as a fork.
I once designed the schema for an optimization engine that needed to concurrently explore multiple possible futures. What worked for me was to implement a graph-like data structure, quite like the structure-sharing pattern seen in the persistent map in clojure core.
#2021-08-2115:38Jakub Holý (HolyJak)Thanks for sharing!#2021-08-2011:42Tatiana KondratevichHey!
Can you tell me, please, if I can use datsync (https://github.com/metasoarous/datsync) with datomic cloud?
I see this code block:
(ns your-app
(:require [dat.sync.client]
[dat.remote]
[dat.remote.impl.sente :as sente-remote]
[datascript.core :as d]
[dat.reactor.dispatcher :as dispatcher]))
(def conn (d/create-conn dat.sync.client/base-schema))
But I don't understand how to link this with Datomic Cloud. Can someone help?#2021-08-2013:57Jakub Holý (HolyJak)It seems the snippet is from the client side and thus has 0 to do with datomic. The server side, connected to the other end of the sente websocket channel, is what needs to talk to the DB
If I want to have transaction access for all my functions in ion, do I still have to do the routing?
I mean, I thought it would allow me to make one entry point for all kinds of transactions.
I will be very grateful if you help me to understand this.#2021-08-2014:05Jakub Holý (HolyJak)Sorry, I know nothing about datsync so cannot really assist. I have no idea what the sentence "If I want to have transaction access for all my functions in ion, do I still have to do the routing?" means.
I believe ions also support websockets (not just https) nowadays.#2021-08-2014:12Tatiana Kondratevich@U0522TWDA
Sorry. I meant that I am interested in having access to all functions through ion api.
To use an endpoint and recognize what I want to perform, for example, the function of adding to the database, and on another endpoint, for example, getting data from the database. so that the client can work comfortably with it.
I meant it when I mentioned routing.#2021-08-2014:47Jakub Holý (HolyJak)You can create as many or few endpoints and ion functions as you want.#2021-08-2014:50Tatiana Kondratevichdo you mean creating custom api and linking them to lambda?#2021-08-2016:54Jakub Holý (HolyJak)I'm not sure what I mean, it's been a while since I read how to connect ions with websockets to the outside. Good luck!#2021-08-2211:26husaynHello! Does anyone using datomic analytics with presto + metabase have a working solution to group by time-based attributes?
Pretty much an answer for this question https://forum.datomic.com/t/time-and-histories-in-analytics/1717#2021-08-2213:37florinbraghisHello! Does dev-local support transaction functions? The documentation I’ve managed to find is not clear on whether they are or not. I’m trying to install a function via :db/fn, but it fails with “Unable to resolve entity: :db/fn”.
u might need to have a datomic/ion-config.edn with an :allow declaration to enable the use of your function in transactions:
https://docs.datomic.com/cloud/ions/ions-reference.html#ion-config
but not sure if that's necessary, when using dev-local.#2021-08-2319:23Joe Lane@U0522TWDA No need to deploy ions to get transaction functions, you can use them with dev-local.
#2021-08-2309:26onetomHas anyone used :db.type/uri Datomic attribute value type in their application?
I'm wondering if it would make sense to store email addresses, using this type (and create them using (.URI. "mailto:
Were there any benefits over just using strings?
(I've enquired about this here too https://forum.datomic.com/t/when-to-use-db-type-uri/1932 so u can answer there too)
#2021-08-2323:12KevanHey all, I'm going through the tutorial and I noticed an issue with one of the examples early on. I'm not sure about the best place to report or create a fix.#2021-08-2323:24jaretPop me a link and I will fix it.#2021-08-2323:46husaynhello @jeroen.dejong I found this in the archives
thumbnail: Datomic Analytics is really cool! Is it possible to expose created-at/updated-at like attributes on the tables?
Just curious as to what solution you ended up going with#2021-08-2405:43thumbnailThe most feasible solution would be to manually keep 2 attributes in sync (possible with a dbfn). I had trouble exposing the transactions themselves#2021-08-2421:15husaynI really didn’t want to have to do that 😭. Thanks, I’ll try that#2021-08-2510:39joshkhhello everyone. i'm getting an exception when i try to import-cloud using dev-local 0.9.235 (and previous versions as well). i think this started happening after we upgraded datomic cloud to 884-9095. any ideas? thanks!
(dl/import-cloud {:source {:server-type :ion,
:system "my-system-name",
:query-group "my-query-group",
:region "the-region",
:endpoint "",
:db-name "the-cloud-db"},
:dest {:server-type :dev-local,
:system "dev",
:db-name "some-local-db"}})
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Could not marshal response: Not supported: class java.lang.IllegalStateException
nrepl.middleware.interruptible-eval/evaluate/fn interruptible_eval.clj: 91
clojure.core/eval core.clj: 3214
...
datomic.migrator.cloner/eval19323 REPL Input
datomic.dev-local.impl/import-cloud impl.clj: 515
datomic.dev-local.btindex-db/import-log btindex_db.clj: 574
datomic.dev-local.btindex-db/import-t0 btindex_db.clj: 565
clojure.core/first core.clj: 55
...
datomic.client.api.sync/unchunk-iterable/reify/iterator sync.clj: 62
datomic.client.api.sync/unchunk-iterable/reify/next-chunk-iter! sync.clj: 60
datomic.client.api.async/ares async.clj: 58
clojure.lang.ExceptionInfo: Could not marshal response: Not supported: class java.lang.IllegalStateException
cognitect.anomalies/category: :cognitect.anomalies/fault
cognitect.anomalies/message: "Could not marshal response: Not supported: class java.lang.IllegalStateException"#2021-08-2510:42joshkh^ this doesn't happen if i import databases that i created very recently. just "old" ones.#2021-08-2511:46joshkhhmm, and it's not just dev-local. i'm having similar problems with datomic.client.api/tx-range
(seq (d/tx-range conn {:start 1 :end 100}))
Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Could not marshal response: Not supported: class java.lang.IllegalStateException#2021-08-2514:07jaretWhat version of Java are you using? And what version of client?#2021-08-2514:47joshkhhi Jaret, versions here:
{com.datomic/client-cloud {:mvn/version "0.8.113"}
com.datomic/dev-local {:mvn/version "0.9.235"}
com.datomic/ion {:mvn/version "0.9.50"}
com.datomic/ion-dev {:mvn/version "0.9.290"}}
openjdk 16.0.1 2021-04-20
OpenJDK Runtime Environment (build 16.0.1+9-24)
OpenJDK 64-Bit Server VM (build 16.0.1+9-24, mixed mode, sharing)
#2021-08-2515:02joshkhperhaps it's worth mentioning that we also see this exception when running from inside a container built on openjdk:8-buster#2021-08-2519:00joshkh(i've opened a support ticket with more details 🙂)
#2021-08-2519:01joshkhspeaking of which, i always feel bad opening a ticket without monospace/codeblock support. sorry in advance.#2021-08-2519:01jaretIt is my cross to bear! It's not your fault. A sadness created by zendesk 😞#2021-08-2519:02joshkhas someone on the receiving end of zendesk tickets, i feel your pain#2021-08-2514:05babardo👋 Datomic cloud question here:
We need to call a component protected by ip filtering from Datomic ion instances.
Is there a way to associate elastic IPs with these instances?#2021-08-2517:06Jakub Holý (HolyJak)I believe Ion functions run on query group instances and I suppose those are created based on a https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html. Perhaps there you can ask for an EIP to be assigned?#2021-08-2517:08Jakub Holý (HolyJak)Though those instances likely run in a private VPC and have to go through a https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html. Wouldn't fixing the IP of the gateway suffice?
#2021-08-2618:00emccueIf I wanted to get all the transactions associated with an entity, what would that look like?#2021-08-2618:00emccue(defn all-transactions-for-entity [connection entity]
(d/q {:query '[:find (pull ?transaction [*])
:in $ ?e
:where [?e _ _ ?transaction]]
:args [(d/db connection) (:db/id entity)]}))#2021-08-2618:01emccueright now this is my code but it is only returning 1 transaction#2021-08-2618:08favilaWhat are you expecting? 1 is a possible answer#2021-08-2618:08emccue2#2021-08-2618:09favilaCould you be more precise about what you mean by “associated with an entity”?#2021-08-2618:10emccueI have this function, which inserts an entity and returns all the stuff about it#2021-08-2618:10emccue(defn insert-and-return [db entity]
(let [temp-id (str (UUID/randomUUID))
entity-to-insert (update entity :db/id (fn [id]
(if (nil? id)
temp-id
id)))
transact-result (transact db
{:tx-data [entity-to-insert]})]
(d/pull (:db-after transact-result)
'[*]
(or (:db/id entity) ((:tempids transact-result) temp-id)))))#2021-08-2618:10emccue(def e (insert-and-return connection {:payment-request/price-cents 123}))
=> #'l.datomic/e
e
=> {:db/id 83562883711071, :payment-request/price-cents 123}#2021-08-2618:10emccuethen i update this entity#2021-08-2618:11emccue(def e2 (insert-and-return connection (assoc e :payment-request/price-cents 2345234)))
=> #'l.datomic/e2
e2
=> {:db/id 83562883711071, :payment-request/price-cents 2345234}#2021-08-2618:11emccueso there should be two transactions associated with that :db/id#2021-08-2618:11ghadiunrelated to your question, always pass the db to a query function, not a connection#2021-08-2618:12ghadiif you pass the db , you can make a complicated report/query by asking several questions of the same db value#2021-08-2618:13ghadi(by making several function that all take a db , the call them all with the same db arg)#2021-08-2618:14favilaI think what you mean is “a datom with a matching E was in a TX” If so, then use a history db instead of a normal db and your query should work. The db you use has only currently-asserted datoms in it.#2021-08-2618:15favilaBut to reiterate what Ghadi said, passing “connection” around is an antipatern#2021-08-2618:15emccueokay, so what is a history db#2021-08-2618:15favila(d/history db) => A database with all datoms in it#2021-08-2618:15emccue(defn all-transactions-for-entity [db entity]
(d/q {:query '[:find (pull ?transaction [*])
:in $ ?e
:where [?e _ _ ?transaction]]
:args [db (:db/id entity)]}))#2021-08-2618:16emccueokay so i updated it to take the db#2021-08-2618:16emccue(all-transactions-for-entity (d/history (d/db connection)) e2)
Execution error (IllegalStateException) at datomic.core.pull/pull* (pull.clj:364).
Can't pull from history
#2021-08-2618:16favilaah, I forgot about that. You will need to pass two dbs, and make a decision about the moment-in-time value of the tx you pull#2021-08-2618:17favila(defn all-transactions-for-entity [db entity]
(d/q {:query '[:find (pull ?transaction [*])
:in $ $h ?e
:where [$h ?e _ _ ?transaction]]
:args [db (d/history db) (:db/id entity)]}))
#2021-08-2618:18emccueokay that did it, but i do not understand what is happening#2021-08-2618:18favilahttps://docs.datomic.com/cloud/tutorial/history.html#history-query#2021-08-2618:19emccueokay so i understand that (d/db ...) gets me a logical snapshot of the db#2021-08-2618:19favilaYou were querying all datoms in a history db (i.e., including retractions) and collecting their TX; then you were projecting the tx entities into maps at the moment-in-time of db#2021-08-2618:23emccueso i cannot use pull in the history db?#2021-08-2618:23emccuebecause I would be getting retractions as well as current values?#2021-08-2618:23ghadipull projects an entity at a particular point in time (db), and a history db includes all points in time#2021-08-2618:25emccueokay - so what if i wanted to see the current value of the entity at the time of each one of these transactions#2021-08-2618:25favilaUse as-of on the database#2021-08-2618:26favilahttps://docs.datomic.com/cloud/tutorial/history.html#as-of-query#2021-08-2618:29emccueokay small extension to that - what if i wanted to get all entities affected by a transaction#2021-08-2618:36favilahttps://docs.datomic.com/cloud/time/log.html#2021-08-2618:36favilaIt’s a different index#2021-08-2713:40hden> what if i wanted to get all entities affected by a transaction
d/tx-range is the index to go.
For queries, this might work as well. (see concerns below↓)
[:find ?e
:in $ ?tx
:where [?e _ _ ?tx]]
#2021-08-2713:49favilaThat will be a full scan of the entire index. I’m not sure datomic will even let you issue that query#2021-08-2618:16Ivar RefsdalHi. Is there any simple way to count the total number of datoms in a database?#2021-08-2618:16ghadion-prem or cloud?#2021-08-2618:16Ivar Refsdalon-prem#2021-08-2618:59ghadinot sure if the db-stats function exists in on-prem#2021-08-2618:59ghadibut it does in cloud#2021-08-2618:59ghadishould be O(1)#2021-08-2620:11favilait does not. The transactor reports metrics “Datoms” and “IndexDatoms”. https://docs.datomic.com/on-prem/operation/monitoring.html#2021-08-2620:11favilathat’s the only way I know of to get a quick count in on-prem#2021-08-2708:41Jakub Holý (HolyJak)Hi! If I remember correctly, datalog has some way to traverse references in reverse, using _ in the attribute name, is that correct? I could not find where in the docs it is explained.
I see it e.g. in the https://github.com/cognitect-labs/day-of-datomic-cloud/blob/385438c4d983d9855bf40d83eaabb618048a7cfc/tutorial/query_tour.clj#L75. And my experiment seems to prove my suspicion that using attribute names (even non-ref) starting with _ will lead to weird errors. Yet I do not see that mentioned https://docs.datomic.com/on-prem/schema/schema.html#required-schema-attributes?!#2021-08-2709:45schmeeit’s (briefly) mentioned here: https://docs.datomic.com/on-prem/query/pull.html#reverse-lookup#2021-08-2710:10Jakub Holý (HolyJak)Thank you! It would be nice if the docs mentioned that attribute local names must not start with _ ...#2021-08-2708:58Jakub Holý (HolyJak)Q2: I need to write a function that returns a query for a later execution. The challenge is that I need to hardcode the argument it gets - a set of IDs - into the query itself. In SQL I would do (fn [ids] (str "SELECT * FROM TableX WHERE id IN (" (s/join "," ids) ")")) So far I have
(defn by-ids [ids]
'[:find (pull ?e [*])
:where [?e :reference/$id ?id]
:in $ [?id ...]]) ; FIXME how to include the provided ids in the query?
I am sure it is trivial but I just know too little... Any help is much appreciated!
Perhaps do [(#{ids} ?id)] and drop the input? <-- worked
Update: As per @schmee's suggestion, I ended up with [... :where '[?e :reference/$id ?id] [(ground ids) '[?id ...]]]
#2021-08-2709:48schmeeI believe that the right way to do this is to use multiple inputs: https://docs.datomic.com/on-prem/query/query.html#multiple-inputs#2021-08-2709:50schmeealso this: https://docs.datomic.com/on-prem/query/query.html#relation-binding#2021-08-2710:08Jakub Holý (HolyJak)Multiple inputs would work if I had the input available when I am executing the query (since it is d/q that takes the arguments) but that is not the case, I need to bind this input when I create the query (as data). I.e. my function takes the ids as an argument and must return a query.#2021-08-2710:14schmeehmm… maybe https://docs.datomic.com/on-prem/query/query.html#ground can be helpful here?
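The ground approach suggested above can be sketched as a pure function that bakes the ids into the returned query data, so it can later be run with only a db argument (a sketch; :reference/$id is the attribute from the question):

```clojure
;; Build a query (as data) with the ids embedded via ground, per the
;; suggestion above. The function is pure: it only constructs data.
(defn by-ids
  "Returns a query that finds entities whose :reference/$id is in ids."
  [ids]
  [:find '(pull ?e [*])
   :in '$
   :where '[?e :reference/$id ?id]
   [(list 'ground (set ids)) '[?id ...]]])

;; (by-ids [1 2])
;; => [:find (pull ?e [*]) :in $
;;     :where [?e :reference/$id ?id] [(ground #{1 2}) [?id ...]]]
```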
#2021-08-2710:21Jakub Holý (HolyJak)Thank you! That indeed works, and I expect it is more performant than using a fn#2021-08-2710:22Jakub Holý (HolyJak)I would not discover this without you!#2021-08-2710:22schmeehappy to help 😄#2021-08-2710:16Jakub Holý (HolyJak)What is the right way to write the query "Give me the references that have the given component-id as either their source or target"? One way I guess is
[:find '?e :where ['?t :component/id component-id], '[?e ?attr ?t] '[(ground #{:reference/source :reference/target}) [?attr ...]]]
another would be
[:find '?e :where '(or (and [?e :reference/source ?t] ['?t :component/$id component-id])
(and [?e :reference/target ?t] ['?t :component/$id component-id]))]
though I do not like the duplication in the latter 🙏#2021-08-2713:46hden[:find ?e
:in $ ?id
:where
[?c :component/id ?id]
(or [?e :reference/attr ?c]
[?c :reference/attr ?e])]
If you are looking for graph traversal, see:
https://forum.datomic.com/t/how-to-do-graph-traversal-with-rules/132#2021-08-2811:07pkovaif I want to upsert an entity with a tuple key that contains a db.type/ref, it seems like I can't use a lookup ref in the transaction, otherwise I get Invalid tuple value#2021-08-2811:07pkovais this how it works or am I doing something wrong?#2021-08-2816:03lassemaattaare there any good guides (or best practices) on how to structure an application around datomic? I'm taking my first steps in learning datomic, and at the moment I'm trying to wrap my head around entities and how they should interact with business logic#2021-08-2919:58Ivar RefsdalBusiness logic with pure data sounds good to me, but I'm no expert.
"... Datomic is able to query any data strucutre [sic] that consists of a list of lists."
@U0MKRS1FX
;; This is what an actual Datomic DB actually looks like.
(def my-mock-db [
[123 :name "Test"]
[123 :email "
via
https://augustl.com/blog/2013/find_by_id_in_datomic/
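The blog post's point can be demonstrated directly: d/q accepts any collection of tuples as a data source (a sketch using the on-prem peer API; the data, including the email value, is made up for illustration):

```clojure
;; d/q can query a plain vector of [e a v] tuples, no database needed.
;; Requires the on-prem peer library (datomic.api) on the classpath.
(require '[datomic.api :as d])

(def my-mock-db
  [[123 :name "Test"]
   [123 :email "test@example.com"]]) ; illustrative values only

(d/q '[:find ?email
       :where
       [?e :name "Test"]
       [?e :email ?email]]
     my-mock-db)
;; => #{["test@example.com"]}
```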
#2021-08-2922:23favilaAre you using on-prem? If so I recommend against using d/entity (entity maps) for anything load-bearing. It’s fine and convenient for tests or repl use but elsewhere it is an unpredictable source of io, and it’s very easy to lose track of what the data dependencies are over time (that’s its power and its liability). Also people seem continually surprised by its equality semantics#2021-08-2922:25favilaUse pull expressions and project your data into maps that are the same shape as your data (or as near as you can manage)#2021-08-2922:26favilaKeep keyworded attributes the same “type” in your app and your database, so they obey the same attr predicate. (Doubly so if you use spec)#2021-08-2922:28favilaFor extra expressive power, you can have functions declare the pull expression shape of the data they expect (transitively) and use this info to compose them into larger expressions#2021-08-2922:31favilaAlso remember that entities are not maps. Expect and embrace the asymmetry between reads and writes. Try to frame writes as datafied commands in their own right that produce (or are) tx data with db/ensure, tx fns, etc to provide integrity#2021-08-2922:36favilaFor querying, try encapsulating “business logic” query concepts as named rules so they can be recomposed into larger queries (rules can be polymorphic! Just define the same rule name multiple times)#2021-08-2922:37favilaDo be aware of performance though. Datomic doesn’t do clause reordering, which is really unfortunate and dangerous for rules#2021-08-2922:38favilaAlso keep query concerns to “find the matching entities” and pass in pull expressions for “what I want from those entities” as a parameter (or just use pull-many)#2021-08-2922:39favila(By analogy, in sql terms you want to separate the “select” from the rest of the query)#2021-08-3003:54lassemaattaThanks, great ideas 👍#2021-08-3110:18augustlI'm a bit torn here.
Since the whole db is represented as an immutable value, I kind of also like the idea of just passing the db around and have various functions extract what they need directly from the db. This avoids having to create a separate mapping view of what's in the db, as well as having to know "outside" what needs to be pre-fetched from the db into maps, etc. And for tests, it's trivial to set up a db that contains what you need using with
#2021-08-3110:18augustlI'm not convinced that "hiding" the db adds all that much value, really#2021-08-3110:21lassemaattawhat about leveraging spec/schema to generatively test functions? I would imagine that it's not trivial to generate data for functions which expect entities?#2021-08-3110:25augustlthat's a good point! I've only used generative tests for parts of my code that doesn't know about datomic dbs, and I've also not really used spec/schema either#2021-08-3110:25lassemaatta(also, one argument (in favor of entities) I've heard is that it's a lot easier to accidentally write a really slow query that makes your prod environment slow to a crawl as opposed to pay a smaller performance penalty all the time when traversing between entities)#2021-08-3110:26augustlI suppose that the code I'm talking about that reads in a datomic db, is the part of my code that returns a plain map that I use for rendering GUIs etc 🙂#2021-08-3110:28augustlI tend to allow all that code that sits "between" the db and the rendering to access the full db, and that creating plain maps first and then processing plain maps to generate the GUI data is not something I tend to do a lot#2021-08-3110:28augustlshould be noted, my context is datascript + frontend, not actually datomic per se#2021-08-2922:17Brandon OlivierI’m running through the tutorial for datomic ions and I’m having an issue connecting to my CloudFormation stack.
First off, there’s no ClientApiGatewayEndpoint in my output. I found a url listed somewhere else, that I think is appropriate, but then I get a nodename nor servname provided or not known error. I can’t seem to find any info about this online either#2021-08-3004:20jarrodctaylorIn CloudFormation did you end up with three stacks with a status of CREATE_COMPLETE? You should see ClientApiGatewayEndpoint in both the stack with your system name and the Compute stack.#2021-08-3007:51lambdamHello everyone,
Is it possible to get the result of a datomic query as a stream of results?
My use case would be applying a transducer on very large sets of results without realizing the whole result set in memory.
Thanks#2021-08-3007:53Linus Ericssonin short, no. the query result is realised as a hashset. Can you break down the query into several steps? It would then be possible to make partial reads of the result while working on it.#2021-08-3008:01lambdamThanks for the answer.
I can break it into several pieces but it would increase significantly the complexity of the operation.
My primary goal is to test entities against some core.specs in a database wide manner to ensure data integrity through time.#2021-08-3008:02Linus EricssonIs this a one-time operation or something you strive to do often? Because if you need to do it often, chances are that you need to reconsider that approach.#2021-08-3008:09lambdamBoth.
I do it at dev time but I'm considering integrating this approach for data integrity in a test suite.#2021-08-3008:11Linus EricssonIf you can adapt your validation logic to just check what has been changed, you will be able to use this approach. Otherwise you have to make pre-validations or similar for every transaction hitting the database.
If not, the checks will soon be quite hard to scale.#2021-08-3008:12Linus Ericsson(Sorry to sound negative, I tried this some years ago, and it works quite well for limited test cases, but, IMO, it is not so helpful when the database grows and the integrity checks take longer and longer)#2021-08-3009:38lambdamThanks for the feedback.
I already do it at transaction time, but the business domain is so variable from one client to another that the data evolves very rapidly.
It would be convenient to back-check the data already recorded.#2021-08-3009:40Linus EricssonWell, don't take my words for truth. I think it sounds like a reasonable use case. Maybe you can spin up some large cloud instance to do this when necessary and have the whole database in memory.#2021-08-3010:17lambdamThanks.
For the time being, the data set is still small and will remain processable on a laptop for at least one or two years.
The next step is indeed manageable with big instances with a lot of memory, but that is a workaround that introduces complexity in the architecture. Stream processing would push this scaling problem further out.#2021-08-3010:33favilaYou can use d/qseq to pull lazy, but the result set itself is still eager
#2021-08-3011:40lambdamGreat info. Thanks.#2021-08-3012:30lambdamIf I understand qseq correctly, this piece of code would:
1. Eagerly realize the set of integers that satisfy the query
2. Then get the required pull fields one by one, apply the xform, and step by step calculate the mean of the max rating.
(->> (d/qseq {:query '[:find [(pull ?user [:user/ratings]) ...]
                       :in $ ?survey
                       :where [?survey :survey/user ?user]]
              :args [(get-current-db!) 1234]})
     (transduce
       (comp
         (filter seq)
         (map #(->> % :user/ratings (apply max))))
       net.cgrand.xforms.rfs/avg))
One question: after pulling and processing an element of this lazy seq, is it garbage collected (stream-like) or is it kept in memory until the whole seq is consumed?
Another way of asking this is: after realizing the integer set, is the memory consumption constant or is it growing on every step of the lazy seq?#2021-08-3013:29favila(What’s the (filter seq) for?)#2021-08-3013:29favilaI don’t know for sure, but this feature would be pointless if the query result weren’t truly lazy#2021-08-3013:30favilaI would expect at least the pulls to be lazily realized and released even if the backing entity id list is not, but I see no reason why that can’t be lazily released also#2021-08-3013:31favilabut the query evaluation itself is not lazy, and the entire result (sans pull) is also fully realized by the time qseq returns, that much I do know#2021-08-3014:23Joe Lane@U94V75LDV
• How large is "very large sets of results"?
• Am I interpreting the scenario correctly in my next statement? "For a given survey (`1234`) find the maximum rating for every user associated with that survey (defined as maxes) and then find the mean of maxes "
• Is this for a report, an analytics scenario, or other batch job?#2021-08-3015:17lambdam@U0CJ19XAM
1. For the time being, the set of results is quite small. But since my business domain is academia, fresh data accumulates every year. So within a few years, I suppose that the data sets might not fit into a laptop's memory.
2. Yes, that is the meaning of the piece of code. The idea was to have a pull expression inside the :find clause AND a "simple non trivial" xform in the transducer.
3. The code example is not linked to the project I'm working on. As said in 1., the domain is academia; the code was only an illustration. My use case though is more about having tests for the integrity of the formats of the entities. As an example, I have evaluation entities. They tend to have various forms depending on the context and also they tend to evolve quite a lot through the initial phase (I'm discovering the business domain). What I'd like to have is a test suite to check that the entities match the core.specs that I have in my code AND that the refs correspond to the entity type they are supposed to refer to. I already have thousands (if not tens of thousands) of evaluation entities. I therefore fear that within a few years, I won't be able to eagerly load the whole collection of evaluations to check their shapes (~ their core.specs).#2021-08-3015:27Joe LaneI see. So this is primarily about enforcing consistency? Have you seen https://docs.datomic.com/on-prem/schema/schema.html#entity-specs before? Many entity predicates can be run per transaction. May help to ensure consistency on the way into the system.
As for ensuring consistency of entities already in the database, you could leverage a nested pull pattern with index-pull to lazily walk through evaluation entities and their ref entities and check each entity against its spec.#2021-08-3015:29lambdam@U09R86PA4 the (filter seq) is a shorter equivalent of (filter #(-> % empty? not)).
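[Editor's note] Joe Lane's index-pull suggestion might look roughly like this sketch. All the attribute and spec names (:evaluation/*, :myapp/evaluation) are hypothetical placeholders, and on-prem requires the :avet start attribute to be indexed; treat this as a shape, not a definitive implementation:

```clojure
(require '[clojure.spec.alpha :as s]
         '[datomic.api :as d])

;; Lazily walk every entity bearing the (hypothetical) :evaluation/type
;; attribute via the AVET index and keep the ones that fail a spec.
;; d/index-pull returns a lazy seq, so memory use stays bounded even as
;; the database grows -- consume the result with reduce/transduce
;; rather than holding on to its head.
(defn invalid-evaluations [db]
  (->> (d/index-pull db {:index    :avet
                         :selector '[:db/id :evaluation/type
                                     {:evaluation/author [:db/id]}]
                         :start    [:evaluation/type]})
       (remove #(s/valid? :myapp/evaluation %))))
```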
I just figured that I could have written (filter not-empty). But I tend to use #(-> % empty? not) in my projects.#2021-08-3015:47favilaDo you expect more than a thousand user ratings? perhaps a subquery would be better#2021-08-3016:14favila[:find (avg ?max-rating) .
 :in $ ?survey
 :where
 [?survey :survey/user ?user]
 [(datomic.api/datoms $ :aevt :user/ratings ?user) ?user-ratings]
 [(apply max-key :v ?user-ratings) [_ _ ?max-rating]]]
might work. Or just go for broke and make the whole thing out of d/datoms#2021-08-3016:17favilaor use datomic-analytics for aggregations. Presto’s query planner is just much better than datomic datalog’s at performing aggregations in bounded memory. I’m continually astonished at how fast it is, especially considering it’s using the client api. Unfortunately you lose all history or as-of features
#2021-08-3016:17favilaAlso, sql 😕#2021-08-3016:30favilaI think this would be the equivalent with just datoms. This will not necessarily be the fastest, but it will consume a small and constant amount of memory no matter how big your data gets. You also may not want to hand-write one of these for every query you have in mind…#2021-08-3016:30favila(let [db db]
  (transduce
    (keep #(transduce
             (map :v)
             net.cgrand.xforms.rfs/avg
             (d/datoms db :aevt :user/ratings (:v %))))
    net.cgrand.xforms.rfs/avg
    (d/datoms db :aevt :survey/user survey)))#2021-08-3016:31favilaThis is the opposite extreme. d/q is realize+parallelize everything, this is lazy-everything and keeps no intermediaries.
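[Editor's note] For readers who want to try the nested-transduce shape without a Datomic connection, here is a minimal self-contained sketch over plain vectors, using the same net.cgrand.xforms reducing fn as the snippets above. Each inner vector stands in for one user's ratings; it computes the mean of the per-group maxima:

```clojure
(require '[net.cgrand.xforms.rfs :as rfs])

;; Mean of per-group maxima: empty groups are skipped by `keep`,
;; mirroring the (filter seq) in lambdam's original snippet.
(transduce
  (keep (fn [ratings] (when (seq ratings) (apply max ratings))))
  rfs/avg
  [[1 5 3] [2 8] []])
```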
#2021-08-3017:10lambdamWaa, that is a lot of very interesting information. Thanks a lot.
You're right, the datoms API should be sufficient for my needs and match the low memory requirements for very large datasets.#2021-08-3017:19lambdamAlso
> Do you expect more than a thousand user ratings? perhaps a subquery would be better
The rating thing was for the example. My project is in the academic world.
For the evaluations (of various kinds), I expect to soon have hundreds of thousands of entities. Maybe a million within a year or two. But that is the only case in the architecture.#2021-08-3017:21favilaI mentioned that because pull expressions have a default limit of 1000 for cardinality-many
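[Editor's note] The default limit favila mentions can be raised or removed in the pull pattern itself via the attribute-with-options form; a quick sketch where `db` and `user-eid` are placeholders:

```clojure
;; nil limit = return all values of a cardinality-many attribute
(d/pull db '[(:user/ratings :limit nil)] user-eid)

;; or raise the cap explicitly
(d/pull db '[(:user/ratings :limit 5000)] user-eid)
```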
#2021-08-3017:21favilahttps://docs.datomic.com/on-prem/query/pull.html#limit-option#2021-08-3017:08Ben HammondI have a datomic Ion that is only ever going to be called via HTTP (it handles an AWS Cognito callback)
what are the pros/cons of Lambda vs HttpDirect for this situation?
Does it ever make sense to implement it as a Lambda if it will never be called from any other part of AWS?
Should it always be HTTP_Direct? I am using pedestal to serve the http-direct; I presume there aren't any overhead issues there?#2021-08-3017:09Ben Hammondis this just an irrelevant issue and I should stop procrastinating?#2021-08-3017:10Ben Hammond(bike sheds should always be racing green, IMHO)#2021-08-3021:02Daniel JompheDatomic Cloud Backup and Restore
#2021-08-3021:02Daniel JompheHi! From what I gather, there is no official recipe.
Sources:
1. https://forum.datomic.com/t/cloud-backups-recovery/370: Cognitect mentions S3 is very durable, but doesn't say if we can just copy S3 data from one stack to another, and expect that to work as a restore (what about DDB, EFS, valcache?)
2. https://forum.datomic.com/t/cloud-backups/1713: Cognitect didn't respond.
3. https://docs.datomic.com/cloud/operation/operation.html: Documentation doesn't mention backup and restore.
4. https://ask.datomic.com/index.php/431/cloud-backup-and-recovery?show=431#q431: No answer in the new Ask forum, but I read we can vote on the request and Cognitect will see that as an interesting signal.#2021-08-3021:06Daniel JompheAnd there is an unofficial recipe coming out of a capable open source developer, @U0CKQ19AQ :
1. https://github.com/fulcrologic/datomic-cloud-backup
Some people shared interesting feedback when Tony announced the lib but I don't recall it.
1. https://clojurians.slack.com/archives/C06MAR553/p1628800699143500: Announcement
2. https://clojurians.slack.com/archives/C06MAR553/p1629067074184300: Update#2021-08-3021:07Daniel JompheI'd like to ask if some good practices were shared with some details about proven recipes?
Our wants, going into production soon, are:
• Recovering from very bad days (e.g. someone deletes the raw data in DDB or S3 or executes d/delete-database).
• Copying data into staging.
• Being able to migrate to another AWS region.
So I'd like to explore official/bare solutions and what Tony did too. I see them as complementary. But I wonder if anyone has found any guidance about any bare solution that would not involve streaming and replaying transactions?#2021-08-3104:08tatutafaict there is no official solution (yet at least) for offsite backups for datomic cloud#2021-08-3115:00tony.kay@U0514DPR7 I’m actually doing it two ways. The streaming way is the only way to really get a perfect snapshot of Cloud as far as I know. There may be a way to copy the Datomic underlying stuff, and pay Cognitect to manually stand it up elsewhere, but there is definitely no documented way, as you’ve found. So, streaming is the public API way.
The other thing I’m doing (and you can do) is code that can analyze the dependency graph of a subset of data (e.g. a customer) and emit that as a snapshot of the “db now”. That is way harder, because it is pretty easy for a database to be “too well connected” where you get a lot more than you wanted… and when you start filtering that out, you often end up with too little.
If you want to actually recover a db from a “bad day”, I don’t see an alternative (until Cognitect makes one).#2021-08-3115:04Daniel JompheThanks to the both of you!#2021-08-3115:05Daniel Jomphe@U0CKQ19AQ then I think I will start experimenting with the tool you published, and develop internal enablers around it for our team.#2021-08-3115:06tony.kayYeah, I’m still working on it. The most recent version should work, but I have not had time to get a full successful restore yet…just FYI#2021-08-3115:08Daniel JompheGood to know! 🙂 So would it be better to wait some time before trying it out and starting to provide feedback?#2021-08-3115:08tony.kayThe backup looks fine, though, and it isn’t hard to have a thread running in prod that keeps that up to date. The restore had a bug or two in it, and I think I’ve fixed them all, but I tore down my prod restore system to save $ until I had time to try it again. Running a small target cluster, the restore was going to take 14 days to start (58M transactions)
No need to wait, no. I’d love to have help on it, which is why I published it#2021-08-3115:09tony.kayLike I said, I think it should actually be working at this point… other than you writing scaffolding around it to deploy it#2021-08-3115:10Daniel JompheOk, good - then I should start trying it out. As for the eventual feedback, where would be best for you?#2021-08-3115:11tony.kayIssues on the GH project are fine. You can also just DM me about it.
#2021-08-3115:14tony.kayMy restore loop that I’m using (and have tested fairly well) is just:
(defn do-restore-step [dbname redis-conn]
  (let [conn (connection dbname)
        start-key (str "next-segment-start" dbname)
        next-start-t (car/wcar redis-conn
                       (car/get start-key))
        start-t (if (int? next-start-t) next-start-t 0)]
    (log/debug "Restoring" dbname "segment starting at" start-t)
    (let [{latest-start-t :start-t} (dcbp/last-segment-info dbstore dbname)
          next-start (if (or (nil? latest-start-t) (< latest-start-t start-t))
                       (do
                         (log/debugf "Segment starting at %d is not available yet for %s." start-t dbname)
                         start-t)
                       (cloning/restore-segment! dbname conn dbstore idmapper start-t
                                                 {:blacklist #{:invoice/dian-response :invoice/dian-xml}}))]
      (if (= next-start start-t)
        (do
          (log/debug "There was nothing new to restore on" dbname)
          (Thread/sleep 5000))
        (do
          (log/info "Restored through" dbname next-start)
          (car/wcar redis-conn
            (car/set start-key (car/freeze next-start))))))))
where I’m using Redis to track the remappings and where I last tried to restore. If you run a single node in the “restore target” that runs this code, then you should be fine…otherwise you’d have to do some kind of leader selection algorithm to make sure only one restore thread ever runs at once.#2021-08-3115:18tony.kayI think there are things you could technically do to your database that could cause problems (renaming an ident), so be aware of how it works.#2021-08-3115:24Daniel Jompheoh, that's right, thanks for the warning!#2021-08-3115:29JohnJCurious, what are your use cases for datomic that you have to put up with so much uncertainty (that they never respond to or document) from the company like this?
#2021-08-3119:46souenzzoI would love to hear a statement from the official team. Already asked many times. 😞
#2021-09-0118:12stuarthalloway@U0514DPR7 Thanks for collecting the list of forum posts, etc. where related topics have come up before. I will visit each one and make sure that they are answered.
#2021-09-0118:16stuarthalloway@U0CKQ19AQ Thanks for writing https://github.com/fulcrologic/datomic-cloud-backup. As a streaming ETL, it will probably be adaptable to many uses other than just DR. After I finish reviewing all the points on this thread, I will get back to you with more help on checking the DR box.#2021-09-0118:17stuarthalloway@U2J4FRT2T I am very sorry that we have failed to answer questions that you have asked repeatedly. I will be auditing our process for monitoring the various forums to make sure that doesn't happen again. ... After I answer the questions. 🙂#2021-09-0118:48stuarthalloway@U0514DPR7 Having revisited those old threads, I think it would be easier to follow along if I just start fresh, which I have done via https://clojurians.slack.com/archives/C03RZMDSH/p1630520862101800 in Slack. Please let me know if this covers your questions.#2021-09-0118:53tony.kay@stuarthalloway Yeah, I'm still struggling with that, and have opened an official Datomic Cloud support ticket around backups in general. I can get the backup, but the restore process still doesn't work as written, and I see various little issues with the way I'm doing it (there is no way to easily do two-phase commit so that I'm sure I saved the tempid rewrites after the transaction, because there is no way to back out the transaction; I need to pre-allocate the :db/id, but not sure if I can do that). Anyway... I usually get about 600k transactions restored and it crashes for some reason or other (have not diagnosed it yet...def a bug in my code), but since that takes a few hours to blow up, it's really painfully slow to test... I mean, I'm testing it in the small and it's fine... it only manages to hit the problems after several hours.
So, an official solution would be awesome.#2021-09-0119:00stuarthalloway@U0CKQ19AQ Official solution is definitely where I want to end up. You should not need two phase commit. If you lose your tempid mapping, you should be able to recover it by comparing the matching transactions in the old system and the new.#2021-09-0119:02stuarthallowayOr (even more cautious) I have seen some flavors of this that save the source system entity id as an extra attribute/value on every entity in the target system.
#2021-09-0119:12Daniel Jomphe@stuarthalloway Thanks a lot for following up!
A follow-up FAQ would be why and what made it sensible or reasonable to not publish official solutions until now.
Apart from lack of time or resources, I expect you'll be able to tell us that when you helped some Cloud users, you often took solution X and they found it sufficient for their needs. What was this X?
Was X a log replay, or copying with decrypt re-encrypt at the storage level?#2021-09-0119:14stuarthallowayCloud users helped themselves and shared their experiences with us. (Like @U0CKQ19AQ but in most cases less public.)#2021-09-0119:15Daniel JompheI'm asking that because it might help us weigh our options.
We're more into trying to make sure we tackle the most potent problem first, before deciding on a specific solution.#2021-09-0119:15stuarthallowayX is always some flavor of log replay, because that is what is possible in user space.#2021-09-0119:16stuarthallowayAnd log replay will continue to be valuable even in a world with full-copy backup/restore, because it lets you do things like filter or split databases.#2021-09-0119:16tony.kayright, would love to see a more full-featured version of my quick hack library 🙂#2021-09-0119:17tony.kayjust don't have time to do it myself#2021-09-0119:17Daniel JompheIncredible, Stu (coming from a fan of clojure datomic since 2009).
Even though it's somehow a bit weird to not yet have an official tool, this speaks to the empowerment that your core tools give to the community that they could help themselves in such a way.#2021-09-0119:18stuarthallowayWe are always listening to our users to prioritize what we do next. Cloud backup/restore has been high on the list for a long time but never at the very top.#2021-09-0119:20stuarthalloway@U0514DPR7 I don't know your requirements, but across a broad set of possible requirements log replay is the only user space option. Meanwhile I will be working on the options in product space!#2021-09-0119:22Daniel JompheYour recent redesign to make it possible to pick your own size and price of Prod topology was very welcome indeed, making it practical for us to spin all our stacks using the same topology and tooling.
#2021-09-0119:22Daniel JompheThanks for clarifying our user space options, @stuarthalloway, this is definitely helpful.
One of our top requirements was: what can we do with the least effort that supports a fully functional restore in a regular deployment (not dev-local).#2021-09-0119:32tony.kayThe biggest problem with the streaming replication is the time it takes to get the initial restore replayed. I estimate our restore can easily "keep up" once it gets through the last 2 years of history, but that is going to take about 2 weeks with a t3 xlarge. That means if my restore ever gets screwed up, I have a business continuity issue that could cost me 2 weeks or more of time. That is pretty unacceptable.
#2021-09-0119:33tony.kaySo a fast backup option that could be run regularly (or continuously via s3 replication or something) would really be ideal @stuarthalloway#2021-09-0119:36tony.kayAh, I see you responded to my support request. For those reading here: The s3 copy doesn't work "because of how Cloud uses AWS encryption."#2021-09-0119:41kennyWe have seen the exact same perf, using an almost identical process to your lib @U0CKQ19AQ. On top of the poor full restore performance, you also have the added eng & infra expense of needing to run and maintain this other process that continually backs up your Cloud system. This is absolutely worth it for DR purposes, but it's really just a band-aid to work around the lengthy full restore processes we're able to write in the user space.#2021-09-0119:43tony.kayWell, to be honest I like the idea that we can have up-to-the-minute streaming replication in user space; however, the initial restore time means I have to consider if I want two restores running in parallel just in case one gets screwed up so I don't have a 2-week exposure.#2021-09-0119:51kennyFor a lot of businesses, I could definitely see the appeal, certainly when a full restore time is on that magnitude of time. Curious, if a full restore could be completed in 15 minutes, would the streaming replication process still be as valuable? i.e., is the streaming replication process solving the long full restore problem, having the most up-to-date backup, or something else?#2021-09-0119:51tony.kay@U0514DPR7 Not to speak for Stu, but every business has limited resources and time to provide features. AWS multi-region/multi-hardware means that the data is pretty darn safe. Yes, you can accidentally delete your data, so from a DR perspective we do have to eventually check that box...but as a business (Datomic, e.g.) I imagine you have to ask yourself, as a company, will people buy my product (D.Cloud) without having feature X or Y?
If X is backups and Y is something that gets you more customers, which do you pick to do?#2021-09-0119:53tony.kay@U083D6HK9 In our case we handle financial data, so not losing anything is important. So being able to continuously back up is the real benefit. Continuous restore just reduces down-time if you have to use your backup...No large (atomic, transactional) database I'm aware of can restore the amount of data we have in 15 mins....unless you do streaming replication (which is what I've done in the past with postgresql)
#2021-09-0119:58Daniel Jomphe@U0CKQ19AQ as for us, we're so small for now; we'll go into production for the first time in a few months, with a relatively low volume of ERP-style transactions across few clients for the first year. And we're all learning clojure and datomic on the job. So for us, log replication is certainly quite fast until the DB grows, and your library as it is will probably work easily, and who knows if we'll even use it to rewrite our entire history with better schemas at some point?#2021-09-0119:59tony.kayso, I'm about to revamp the way the library deals with tempids...I never liked that part, and Stu gave me a good idea
#2021-08-3120:16Ben Hammondwhat is the current state of datomic-ions Java 11 support?
AWS is tempting me with Java 11 (Corretto)
What are the chances of it working? I am running datomic version 884#2021-08-3120:59Joe Lanehttps://docs.datomic.com/cloud/changes.html#884-9095
> Upgrade: Compute nodes have been upgraded from Java 8 to Java 11.
#2021-08-3121:05Daniel JompheYes, it works fine now!
#2021-09-0111:05Jakub Holý (HolyJak)What is the canonical way to check that an attribute value is in a particular set? Just using a set as a fn, as in:
[:find ?e :where [?e :person/favourite-colour ?val] [(#{:blue :green} ?val)]]
? 🙏#2021-09-0111:34schmeeI believe you can use ground for this too 😄#2021-09-0113:12Jakub Holý (HolyJak)Hi, thank you! But what if I want to express something like "find the entity (a person) that is a descendant of #{Charlemagne, Leonardo da Vinci} and is married to one of #{Gandhi, M.L.King}"? I could use ground for both sets but is it not doing a cartesian product of values from both sets, effectively? I have no idea how it works underneath so maybe I do not make sense...#2021-09-0113:52favila(comment
  ;; Set filtering cannot be introspected by the query engine
  ;; This can be good if the set is large
  ;; and there's no index datomic could use
  ;; to retrieve matching datoms.
  ;; Evaluation cannot be parallel,
  ;; but the intermediate result set will be smaller
  ;; and none of the unification machinery will get involved.

  ;; As a literal:
  [:find ?e
   :where
   [?e :person/favourite-colour ?val]
   [(#{:blue :green} ?val)]]

  ;; As a parameter:
  [:find ?e
   :in $ ?allowed-val-set
   :where
   [?e :person/favourite-colour ?val]
   [(contains? ?allowed-val-set ?val)]]
  #{:green :blue}

  ;; Using unification
  ;; If you bind the items you are filtering by to a var
  ;; datalog will perform filtering implicitly via unification.
  ;; This is good if your filter value is indexed,
  ;; because now the query planner can see it
  ;; and possibly use better indexes or parallelize IO.
  ;; However, this may produce larger intermediate result sets
  ;; and consume more memory because of unification.
  [:find ?e
   :where
   ;; Could use an index
   [(ground [:green :blue]) [?val ...]]
   [?e :person/favourite-colour ?val]]

  [:find ?e
   :where
   ;; Reverse clause order:
   ;; Now it *probably doesn't* use an index?
   ;; Depends on how smart the planner is.
   ;; Worst-case, it's as bad as a linear exhaustive
   ;; equality check of each val
   ;; which may or may not be worse than a hash-lookup
   ;; depending on the size of the set.
   [?e :person/favourite-colour ?val]
   [(ground [:green :blue]) [?val ...]]]

  ;; As a parameter:
  [:find ?e
   :in $ [?val ...]
   :where
   [?e :person/favourite-colour ?val]]
  [:green :blue]

  ;; Use a rule with literals
  ;; In most cases this will be the same as the previous approach,
  ;; but without the "maybe"s because you don't need to trust the query planner.
  ;; This is the most explicit and predictable,
  ;; and definitely parallelizeable (rules inherently are).
  ;; But you *must* use literal values.
  [:find ?e
   :in $
   :where
   (or [?e :person/favourite-colour :green]
       [?e :person/favourite-colour :blue])]

  ;; In any given case I would benchmark all three.
)
#2021-09-0113:53favilaSummary: There are three different basic techniques, and they can have dramatically different perf depending on the situation#2021-09-0114:07Jakub Holý (HolyJak)thank you so much! You are a real treasure. I wish this information was available in the official docs...#2021-09-0115:13souenzzoUsing datomic on-prem in ~2018 I had an issue where I used a set as a parameter: if you send both query and parameters to a datomic function running on a real transactor (not memory), your set will turn into an array and it will throw, but only in production code.
;; As a parameter:
[:find ?e
 :in $ ?allowed-val-set
 :where
 [?e :person/favourite-colour ?val]
 [(contains? ?allowed-val-set ?val)]]
#{:green :blue}
#2021-09-0118:07favilaI think I’ve been doing this in prod for at least 5 years without problems#2021-09-0118:16souenzzo"send both query and parameters to a datomic function"
like
[... tx-data .. [:my-custom-db-fn [.. query ...] [.. args ..]]]
I used to have a db-fn that receives a query and args, runs the query, and throws if the result is not empty.
really convenient db-fn to solve race-conditions#2021-09-0118:23favilaAh I see. So sets turned to vectors (maybe arraylists)?#2021-09-0113:09Jakub Holý (HolyJak)When I use :db/ident for enumerations (https://docs.datomic.com/on-prem/schema/schema-modeling.html#enums), the only way to enforce that a :db.type/ref attribute has only one of the values of the enum I want (imagine :color/green etc) is to install a :db.attr/preds on that attribute and a custom predicate function that compares the supplied value against a hardcoded set. Correct?
#2021-09-0113:12favilaYou can’t really use :db.attr/preds because it receives the value and no db#2021-09-0113:13favilaYou need the more general :db/ensure mechanism#2021-09-0113:13favilaor, just trust the application to do the right thing?#2021-09-0113:15Jakub Holý (HolyJak)I could, if I hardcode the values in the predicate, no?
(defn color? [v] (contains? #{:color/green, ...} v))
Even if I had the DB I would not know how to "find all the defined colors" as I can hardly search for all idents starting with :color/ ?
Our experience is that the app breaks the trust and having multiple layers of checks is desirable 😅#2021-09-0113:17favilaThe value will be an entity ID, not a keyword#2021-09-0113:17favilathat the predicate receives#2021-09-0113:17favilaif you want to use a keyword type instead of an enum, then you can use a predicate#2021-09-0113:17Jakub Holý (HolyJak):man-facepalming:#2021-09-0113:18favilawhether to use a ref or a keyword type for enums is really a tradeoff. Being able to use db.attr/preds is one tradeoff#2021-09-0113:18Jakub Holý (HolyJak)I see. Thanks a lot for the clarifications!#2021-09-0113:18favilaothers are: can you represent the value as a keyword easily with d/pull?#2021-09-0113:19favila(keyword: yes, ident no)#2021-09-0113:19Jakub Holý (HolyJak)So what pros do these idents have?#2021-09-0113:19favilaidents are entities, so you can assert additional information on them
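[Editor's note] A sketch of what "asserting additional information" on an ident entity looks like, in the on-prem style used elsewhere in this thread. The :color/hex and :app.ident/label attributes are hypothetical and would need their own schema; transacting a map containing :db/ident upserts onto the existing ident entity:

```clojure
@(d/transact conn
   [{:db/ident        :color/green
     :color/hex       "#00FF00"
     :app.ident/label "Green"}])
```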
#2021-09-0113:20favilaand you get a VAET index of them#2021-09-0113:20favilaand because of the semantics of ident lookup you can rename them safely#2021-09-0113:21Jakub Holý (HolyJak)it would be awesome if https://docs.datomic.com/on-prem/schema/schema-modeling.html#enums explained these things 🙏 (so people would stop bothering you with the same questions 😅)#2021-09-0113:22favilayou can also add your own higher-level, introspectable schema layer more easily if the enums are idents. You could have the attr itself reference the allowed ranges in a queryable way (vs being locked inside a predicate fn)#2021-09-0113:23favilaI think the blanket “use ident entities for enums” advice dates from before d/pull and attribute+entity predicates#2021-09-0113:23Jakub Holý (HolyJak)thank you!!!#2021-09-0113:23favilathe d/entity api represents ident entities as keywords when you navigate to them#2021-09-0113:24favilaand because there was no native higher-level predicate enforcement mechanism there were really no other considerations#2021-09-0113:24favilain that world the “keyword” choice is strictly less powerful#2021-09-0115:12JohnJthe d/pull thing with enums is definitely annoying 😉#2021-09-0115:15souenzzoon pull, having :my-ref-to-enum {:db/ident :my-enum} is actually better than :my-ref-to-enum :my-enum for UI/fulcro developers
It allows you to ask for labels in the same query that you ask for enums [{:my-ref-to-enum [:db/ident :app.ident/label :app.ident/description]}]#2021-09-0118:27stuarthallowayDatomic Cloud "Backup/Restore" FAQ:#2021-09-0119:12kennyHi Stuart. Thank you very much for an official response on this topic. We too have been using Cloud for several years and backup & restore has been an ongoing struggle. Since this is such a common topic, it would be awesome to add a page to the Cloud documentation with the information you have laid out below.
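To summarize the thread for skimmers, a sketch of the two enum styles discussed above. All attribute and ident names here (:order/status, :order.status/shipped, :app.ident/label) are hypothetical, and the pull call assumes the usual (d/pull db pattern eid) shape:

```clojure
;; Style 1: keyword-typed enum, optionally guarded by an attribute predicate.
;; The value IS a keyword, so d/pull returns it directly.
{:db/ident       :order/status
 :db/valueType   :db.type/keyword
 :db/cardinality :db.cardinality/one}

;; Style 2: ref-typed enum pointing at ident entities. Idents are entities,
;; so you can assert extra facts on them, and you get a VAET index of them.
{:db/ident       :order/status
 :db/valueType   :db.type/ref
 :db/cardinality :db.cardinality/one}
{:db/ident        :order.status/shipped
 :app.ident/label "Shipped"}

;; With style 2, pull returns {:db/id ...} by default; ask for the ident
;; (and any labels) explicitly, as souenzzo notes above:
(d/pull db [{:order/status [:db/ident :app.ident/label]}] order-id)
```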
#2021-09-0120:19stuarthallowayHi Kenny. Agreed -- We will update the docs once this conversation is complete.
#2021-09-0118:28stuarthalloway1. Is my data safe against individual hardware failures? Very much yes -- Datomic Cloud stores data to multiple AWS storage services, each of which is itself redundant.#2021-09-0118:29stuarthalloway2. Does Datomic Cloud have a backup/restore feature that makes a complete second copy of a database? No, but we are looking into options for this.
#2021-09-0119:23kennyIf Datomic Cloud provided an official backup/restore feature, would it take the same path Tony took?#2021-09-0119:41Daniel Jomphehttps://clojurians.slack.com/archives/C03RZMDSH/p1630524054117900?thread_ts=1630357320.076200&cid=C03RZMDSH#2021-09-0119:42kennyI saw 🙂 I was hoping for more technical info on how Datomic might approach this from that product space, whatever they can share publicly, ofc.
#2021-09-0119:43Daniel JompheThanks for your own questions and comments, btw, kenny!
#2021-09-0120:20stuarthalloway@U083D6HK9 Those answers are on the other side of the design process, so I don't know yet.
#2021-09-0118:32stuarthalloway3. In the absence of full backup/restore, how can I make a complete second copy of a Datomic database? You can use the log API to get a complete, time ordered copy of the data and save it anywhere, and/or transact it into a second Datomic system as you go. I have not read the code and so cannot comment on correctness, but @tony.kay’s https://github.com/fulcrologic/datomic-cloud-backup demonstrates the idea.#2021-09-0119:23kennyWe have a solution conceptually identical to Tony's. I ran it on one of our production databases (100+m datoms). While ours does not hit a crash like Tony has seen, the speed at which it moves is far too slow. We were looking at multiple weeks of 24/7 run time to do a copy from start. Both the source VM and the target Cloud system were running at below 10% CPU utilization. I do not know of a way around this due to the inherently serial nature of a restore via the log API. The total bytes transferred over the network was also incredibly low. Are there any methods to improve the full restore time with this method or is that just how it goes?#2021-09-0120:23stuarthalloway@U083D6HK9 Are you running inside an ion so that you can eliminate one hop of network latency?#2021-09-0120:52kennyThat test was not running in an Ion. I was running locally through my REPL to a Cloud system. Although I don't have concrete evidence readily available, I do not think network latency is the bottleneck.#2021-09-0118:40stuarthalloway4. Why can't I just copy Datomic's S3 and/or DDB tables to another region? Datomic uses region-specific KMS keys for encryption, so copying data at the storage level requires (at least in the naive case) decrypt and re-encrypt steps.#2021-09-0119:47Daniel JompheI intuit that this would be the most practical solution.
With the con that it wouldn't allow filtering anything in the process.
Still, might be a great way to provide a feature that would be used by clients.
OTOH I'm happy to see what product-space solution you come up with.#2021-09-0118:56stuarthalloway5. Can I create a new Datomic database and guarantee the same :db/id values as an existing database? Not at present. Also something we are looking at.#2021-09-0119:33kennyMost often a complete restore of the history is desirable, however, we do have some use cases where just "restoring" the current state of the db -- no history -- would be sufficient. The only reason to consider such an approach, for our use cases, is restore speed. If a complete restore would be just as fast as a current state restore, then we'd prefer the former. Given that a complete restore is substantially slower than a current state restore, we wrote a process to do the latter. This process, named a "current state restore," is still bottlenecked by the need to map old ids to new ids, preventing us from applying any parallelism. With the guarantee that a new database would have the same :db/id values as an existing database, the bottleneck could be removed, allowing us to parallelize as much as the target system would allow for.#2021-09-0120:26stuarthallowayHow does your "current state restore" organize transactions?#2021-09-0120:50kennyAt a high level: 1) reads the eavt index into batches, 2) resolves each batch of datoms against the current old-id->new-id mapping, 3) transacts the resolved datoms, and 4) updates that mapping. The process must be serial due to not knowing how to update the mapping until after the transaction is done.#2021-09-0211:17stuarthallowayWhat is the batch size? Does increasing it speed things up?#2021-09-0215:02kenny500. I'll give it another shot at 1,000.#2021-09-0217:03kennyJust following up here. I don't have any evidence from my previous attempts stored anywhere unfortunately. The current state restore ran for 57 minutes before throwing an exception in a d/datoms call (will ask a question on that in a separate thread). 
At that final time, 5,490 transactions succeeded and 2,746,671 datoms had been transacted (sum of the count of :tx-data returned from d/transact). I have attached a screenshot of the CW dashboard for the query group that was used to read datoms. Upon revisiting this, it is unclear whether the bottleneck is from reading datoms via d/datoms or transactions.#2021-09-0400:00kennyFollowing up again... I have rewritten the datom reading portion to run in parallel. I also added some additional monitoring to report the queue backlog (the number of datoms waiting to go through the serial transact & mapping process). The queue stays pegged at 100% utilization (backlog of 20k datoms). So I can now confirm that it is the serial transact & mapping process that is slowing down the current state restore.#2021-09-0211:17joshkhhmm. i'm running into an issue using dev-local 0.9.235 when trying to import one of our cloud dbs.
(dl/import-cloud
{:source ...,
:dest ...})
Importing...................Execution error (ExceptionInfo) at datomic.core.anomalies/throw-if-anom (anomalies.clj:94).
Item too large
java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: Item too large {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "Item too large", :datomic.dev-local.log-representation/count 24356559, :datomic.dev-local.log-representation/limit 1000000, :datomic.dev-local.log-representation/datom-eid 23978149582305561}
23978149582305561 is an entity with an attribute that has a very large string value, and is unfortunately stuck in our history. is there a way around this?#2021-09-0320:18Joe LaneHey @U0GC1C09L, can you import up to the bad transaction, transact all non-bad attrs of the bad transaction, then import again starting at t+1 of the bad transaction?#2021-09-0610:15joshkhhey Joe, thanks for the response. i suppose i could, but this is part of a larger workflow to backup and "restore" Cloud dbs. if possible i'd like to avoid coding in edge cases for specific databases, or catching exceptions and iterating the import process for "bad" datoms that clearly do exist in the history. i will if we need to, but this feels like an issue with dev-local and a clash between its constraints and the constraints of datomic cloud#2021-09-0610:57joshkhi'm picturing some interesting cases for tracking all of the datoms in the skipped transaction, and then deciding how to handle future transactions against them as we replay the transaction log. for example: only replay retractions of skipped datoms when there has been an addition between a skipped transaction and the transaction being replayed. oof.#2021-09-0217:05zalkyHi all, the Datomic query docs say that you cannot use a database source within a rule. The implication of this is that you also cannot use built-in expressions like missing? in rules, is that correct?#2021-09-0217:22favilaWhere do you see that? Rules are scoped to a single datasource and cannot realias, but you can invoke them with a different datasource ($ds rulename …) and inside the rule $ is available#2021-09-0217:10kennyCalling d/datoms returns an Iterable of datoms (Client API). For error handling, it points you to the namespace doc which states that all errors are reported via ex-info exceptions. My objective is to do a complete iteration through all datoms returned from d/datoms. 
My iteration (via reduce) made its way through a large number of datoms before throwing an exception while unchunking the d/datoms result (full stacktrace in thread). What is the recommended way to retry retryable anomalies thrown from a d/datoms chunk?#2021-09-0217:10kenny#2021-09-0217:12hadilsHi. I finally deployed my first lambda ion from my Fulcro app. I have the latest Datomic Cloud set up. I am getting a connection refused when I try to invoke the Lambda function — it is trying to connect to a host in the VPC. How do I go about troubleshooting this? Is it an IAM problem or a problem with the VPC gateway?#2021-09-0221:03Jakub Holý (HolyJak)I suppose the target host's security group allows connections on any port from the VPC?#2021-09-0217:45souenzzoCan I assume that :db/txInstant is unique?
I planned originally to save the t reference to an older db
but since I don't have the t->tx function anymore, I can't create it for older values
should I use t or tx?!#2021-09-0217:46favilaYou need a “real” tx if you want to look at the TX entity itself#2021-09-0217:47favilabut for things like as-of, tx-range, sync, etc, they accept T or TX#2021-09-0217:51souenzzoIs there a problem to have an entity pointing to a transaction?!#2021-09-0218:17favilano#2021-09-0218:17favilatransactions are entities#2021-09-0218:18favilaThat’s how this technique is possible: https://docs.datomic.com/on-prem/best-practices.html#add-facts-about-transaction-entity#2021-09-0217:51souenzzoIs there a problem to have an entity pointing to a transaction?!#2021-09-0218:00souenzzowhy datomic client api do not have t->tx and tx->t functions?#2021-09-0218:01souenzzoCan I use this?
(defn t->tx
[t]
(+ t 13194139533312))
(defn tx->t
[tx]
(- tx 13194139533312))
#2021-09-0218:04Alex Miller (Clojure team)I'm no expert, but I don't think these things have that relationship in cloud, so no#2021-09-0218:19souenzzoI tried to do a :thing/as-t that points to a point in the past
But I can't use this because I need to create it for older entities and I don't have the t anymore
So I changed my approach: :thing/as-tx
Now it is easy to create thing entities for older entities in the DB, but it's hard to create for newer ones, since for newer ones I get the t from the db and I can't save the t value#2021-09-0218:20souenzzoI can create a :thing/as-of where sometimes it is a t and other times it is a tx
Is this a good idea?#2021-09-0218:21favilaCan we step back? what problem are you solving?#2021-09-0218:22favilaAre you accidentally falling into this trap? https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html
#2021-09-0218:22souenzzoI need to create an entity that references another entity at an exact point in time.#2021-09-0218:23favilaPutting the modeling question aside, how do you decide on what moment in time?#2021-09-0218:25souenzzosomething like: this report entity is generated from this entity at this db.#2021-09-0218:27favilahow do you arrive at “this db”?#2021-09-0218:31souenzzoAt this moment, my code is:
(defn do-report
[db id]
.... {:tx-data [... {:report/db (:t db)}]})#2021-09-0218:36favila(You never run that fn with a filtered (e.g. as-of) db?)#2021-09-0218:37favilaI agree not having t->tx is annoying, and I’m concerned by alex’s comment, it’s a pretty fundamental relationship and difficult to imagine cloud being different#2021-09-0218:38favilaIt’s quite easy to write yourself (just some bit-masking) but alex is giving me pause#2021-09-0218:38favilahowever, you may be better off querying for a specific tx entity to use, then using that with an as-of; or you could use tx-range to find the transaction corresponding to the basis T and inspect its data for the :db/txInstant assertion#2021-09-0218:39favila(def ^:const ^long MASK-42
2r000000000000000000000111111111111111111111111111111111111111111)
(def ^:const ^long TX-PART-BITS
2r000000000000000000011000000000000000000000000000000000000000000)
(defn tx->t ^long [^long t]
(bit-and MASK-42 t))
(defn t->tx ^long [^long t]
(bit-or TX-PART-BITS (bit-and MASK-42 t)))
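A quick arithmetic check of the masks just above (my sketch, pure bit math, on-prem semantics only): TX-PART-BITS is 3 (the entity id of the :db.part/tx partition) shifted left 42 bits, which equals 13194139533312, the same additive constant souenzzo proposed earlier in the thread.

```clojure
;; On-prem entity ids keep the partition in the high bits and t in the
;; low 42 bits; 3 is the entity id of the :db.part/tx partition.
(def tx-part-base (bit-shift-left 3 42))          ; 13194139533312

(defn t->tx [t]  (bit-or tx-part-base (bit-and (dec (bit-shift-left 1 42)) t)))
(defn tx->t [tx] (bit-and (dec (bit-shift-left 1 42)) tx))

(t->tx 85)             ; => 13194139533397
(tx->t 13194139533397) ; => 85
```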
#2021-09-0218:39favilaThis definitely works for on-prem#2021-09-0218:40favilaThe “tx-part-bits” is just the number 3 (= the entity-id of the “tx” partition) shifted over 42 bits#2021-09-0218:41favilad/entid-at on on-prem lets you compute entity-ids for arbitrary partitions#2021-09-0309:30TwanIs ?sslmode=require respected on Postgres connections for Datomic peer server? We're not sure if it is, but we'd like to enforce SSL on all our Postgres connections. This far, we were unable to connect via SSL#2021-09-0317:25Jakub Holý (HolyJak)Hi! Is it possible to use datomic.client.api against on-prem Datomic, from a peer server itself? (B/c I want to use ragtime.datomic, which uses the client API) https://docs.datomic.com/client-api/datomic.client.api.html#var-client it would seem I need to run a separate peer server?!#2021-09-0317:27Jakub Holý (HolyJak)FYI typo in the official docs at https://docs.datomic.com/on-prem/overview/clients-and-peers.html
> begin with Datomic dev-local, which [should be with?] the client library in-process.#2021-09-0320:10kennyIs the d/datoms :eavt index guaranteed to be sorted in ascending order of :e? e.g., the following is true
(= (sort-by :e (d/datoms db {:index :eavt})) (d/datoms db {:index :eavt}))#2021-09-0320:21favilathe name of the index is the sort order of the index
#2021-09-0320:53ghadibut eav are ascending, t is descending#2021-09-0320:12ghadi@kenny yes https://docs.datomic.com/on-prem/query/indexes.html#basics
#2021-09-0320:48kennyI'd like to iterate through the d/datoms eavt index as fast as possible, applying parallelism if possible. Are there any techniques for doing so? Using :offset & :limit to batch doesn't seem like an effective strategy since d/datoms will need to walk the whole datoms index regardless.#2021-09-0320:53favilaIf you’re using the sync API, you should pretty much never use offset+limit#2021-09-0320:54favilajust keep consuming it#2021-09-0320:54favilaoffset+limit will turn it into O(n!); just consuming it will continue from whatever chunk pointer it has#2021-09-0320:54favilaincreasing :chunksize can help too#2021-09-0320:55favilafor doing stuff in parallel, if you can use :AEVT instead, you can find all the :As and issue a d/datoms for each one#2021-09-0320:56favila[:find ?a :where [:db.part/db :db.install/attribute ?a]]#2021-09-0320:57favilathat gets you all attributes. Then (d/datoms :aevt a1/2/3/n)#2021-09-0321:00kennyOh, that's an excellent idea! Thank you!!#2021-09-0321:00kennyWhy does changing the chunk size have an impact?#2021-09-0321:00favilaIt sends more datoms at a time#2021-09-0321:00favilaIt seems to be faster from experience 🤷 YMMV#2021-09-0322:53kennyDoes the sync client API officially support passing :chunk? From the code, I see that it happens to work right now since the sync arg map is just passed to the async api.#2021-09-0323:00favilahttps://docs.datomic.com/cloud/client/client-api.html#chunking#2021-09-0323:00favila> https://docs.datomic.com/client-api/datomic.client.api.html functions are designed for convenience. They return a single collection or iterable and do not expose chunks directly. The chunk size argument is nevertheless available and relevant for performance tuning.#2021-09-0323:01favilain addition, AFAIK the sync apis are implemented with the async ones, so this makes sense
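favila's per-attribute suggestion might look like the following sketch. It assumes the Client API required as datomic.client.api :as d; process-datom! is a hypothetical consumer, and the :chunk option in the sync API works per the caveat above:

```clojure
;; Enumerate installed attributes, then walk :aevt per attribute in parallel.
(defn attr-idents [db]
  (map first
       (d/q '[:find ?ident
              :where
              [:db.part/db :db.install/attribute ?a]
              [?a :db/ident ?ident]]
            db)))

(defn walk-datoms-in-parallel [db process-datom!]
  (doall
   (pmap (fn [attr]
           (run! process-datom!
                 (d/datoms db {:index      :aevt
                               :components [attr]
                               :chunk      10000}))) ; bigger chunks, per favila
         (attr-idents db))))
```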
#2021-09-0323:07kennyGosh, you're a master of the docs. IMO, should be a part of the actual API docs 🙂#2021-09-0323:07kennyi.e., https://docs.datomic.com/client-api/datomic.client.api.html#2021-09-0323:15kennyCurious, have you had to handle exceptions thrown while traversing d/datoms? Ending up with some wacky feeling code:
(defn read-datoms-with-retry!
[db argm dest-ch]
(let [datoms (d/datoms db argm)
*offset (volatile! (:offset argm 0))]
(try
(doseq [d datoms]
(async/>!! dest-ch d)
(vswap! *offset inc)) ; volatile!s are updated with vswap!, not swap!
(catch ExceptionInfo ex
(if (retry/default-retriable? ex)
(do
(log/warn "Retryable anomaly while reading datoms. Retrying from offset..."
:anomaly (ex-data ex)
:offset @*offset)
(read-datoms-with-retry! db (assoc argm :offset @*offset) dest-ch))
(throw ex))))))#2021-09-0323:17kennydoseq is wrapped in try/catch b/c seq chunks are realized in doseq, not in d/datoms call.#2021-09-0506:22popeyeI am trying to install datomic in my local and getting below error, Can anyone help me please#2021-09-0513:13emccueTry deleting your .mvn/ directory - idk whats happening exactly but that might be part of it#2021-09-0610:46popeyelooks like an issue with the java version, looks fine with java 8#2021-09-0613:19Jakub Holý (HolyJak)This is weird, I get this error
:cognitect.anomalies/message "Attribute values not unique: :component/$order",
:db/error :db.error/lookup-ref-attr-not-unique
for this attribute that is NOT unique
{:db/ident :component/$order
:db/valueType :db.type/bigdec
:db/cardinality :db.cardinality/one}
why could that be? Or am I misunderstanding the error?
#2021-09-0613:36favilaYour txdata contains or expands to something like this: [:component/$order something]. This looks like a lookup ref. It is attempting to resolve it, but it can't#2021-09-0613:37favilaBecause that makes no sense#2021-09-0613:37favilaNote error is “lookup-ref-attr-not-unique”#2021-09-0614:01Jakub Holý (HolyJak)ah, thanks a lot! I will check the tx data.
That is the experience-powered art of interpreting error messages 🙂#2021-09-0614:09Jakub Holý (HolyJak)That was it, my tx data preparation pipeline lacked (into {})#2021-09-0614:02Jakub Holý (HolyJak)Solved: Should have been (pull ?a [*])
What is wrong with my pull here:
(d/q '[:find (pull ?a)
:where [?a :db/ident :component/$order]]
db)
; => IndexOutOfBoundsException at datomic.query/normalize-pull
? 🙏
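For completeness, the corrected query from the "Solved" note above; pull in a find-spec takes both the entity variable and a pattern, and (pull ?a) alone triggers the IndexOutOfBoundsException seen here:

```clojure
(d/q '[:find (pull ?a [*])   ; pattern [*] pulls every attribute of ?a
       :where [?a :db/ident :component/$order]]
     db)
```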
#2021-09-0614:43César OleaHello everyone! I'm currently evaluating Datomic Cloud version 884.9095. For production I have a split stack system running a primary compute group and a separate query group.
My question is for staging and beta. I read somewhere that query groups could be used to separate environments: dedicate one query group for staging, one for beta, etc. I guess I can have a "beta" query group, have it use a separate database to keep the production database clean, but if I understand things correctly writes would still be processed by the primary compute group. This would be a problem if we needed to do load testing for instance.
Another option would be to run separate Datomic systems: a production stack for each environment, which solves the problems above at increased maintenance and cost.
How are you managing your separate environments? Thanks!#2021-09-0616:03tatutwe had completely separate AWS accounts for dev / staging and production... I think that's "safest" as everything is completely isolated, haven't tried running in the same datomic#2021-09-0616:20lispers-anonymousSeparate datomic systems and/or aws accounts per environment is a safe bet. It allows us to upgrade the datomic version in lower environments first, make sure everything works. Then upgrade datomic in production at a later time.#2021-09-0617:13César OleaGot it, thanks for your recommendations! I agree, safest to use separate Datomic systems and/or AWS accounts. Good to have opinions on people already running it in production. Thanks!#2021-09-0617:16stuarthalloway@U02DNF3TW3E most importantly for cost purposes -- turn off your load testing system when not running load tests 🙂#2021-09-0617:19César OleaThat's a good point @U072WS7PE it's great to be able to scale in the compute group when not in use, similar to how aurora serverless works. BTW I've enjoyed your recorded Day of Datomic videos, good stuff!#2021-09-0706:30Faiz Haldecould someone answer this (forgot to first check if there was a channel dedicated for datomic here)? https://stackoverflow.com/questions/69078793/datomic-durability-guarantees-under-storage-failure thanks2#2021-09-0711:46favilaThe “durability” page you link explains how datomic uses storage. Most of what it writes is immutable (ie the entry is never updated in storage), so there’s nothing to invalidate. The mutable parts are few, tiny, and updated by only a single writer (the transactor) and reference immutable trees. (In fact the transactor is the only process which writes anything to storage—peers are read-only). These make datomic very tolerant of storage misbehavior-it just really doesn’t depend on updates to the same entry from multiple sources, and that’s where most of the tricky code (and bugs) are. 
You will get an error rather than an incorrect query result. That said it’s unreasonable to expect datomic to work with completely broken storage.#2021-09-0712:11Faiz Haldeok, my understanding might be very limited here. I was talking about inserts only
the scenario i was referring to was
1. let's say there's an empty datomic database
2. writer inserts a fact (Cassandra acknowledged it)
3. client immediately reads the fact (I'm guessing the client/peer node caches the result)
4. some failure happens on Cassandra and the acknowledged writes disappear (which seems to be possible with Cassandra as per jepsen, not sure if it's fixed in the newer versions)
So now there's a disparity between the peer/client cache (unsure who caches) and Cassandra storage. Am I making sense in this scenario?
Datomic does recommend replicating the data 3 times for Cassandra so there's certainly a high degree of reliability#2021-09-0712:21favilaPeers cache, but the cache may be pre-filled by a transactor if they share the same cache. In your scenario, a key is simply missing. The moment a peer needed that segment and it is not in a cache layer, datomic would fail with an error.#2021-09-0712:25favilaThe jepsen article says these lost writes were from the “limited transaction” mode. I don’t see why datomic would use that.#2021-09-0712:27favilaI have no knowledge of datomic’s Cassandra implementation nor have I ever used it, but I know datomic doesn’t need cross-key transactional writes.#2021-09-0714:12Faiz Haldeok thanks2 I'll try to read further about its internals#2021-09-0714:18favilaThis may help also: https://tonsky.me/blog/unofficial-guide-to-datomic-internals/
#2021-09-0714:18faviladatomic uses storage as a pretty-dumb key-value store of binary blobs where the vast amount of data (key count and byte-count) is immutable#2021-09-0714:20favilathere's just not much that can go wrong. The datomic cloud product doesn't even bother with a database: the primary datastore is s3 (with dynamo for that handful of mutable keys I mentioned) with multiple cache layers on top
#2021-09-0714:20favilathe mutable keys are never cached#2021-09-0712:12Jakub Holý (HolyJak)Is this the right approach to finding all the people that NEVER owned a Tesla, i.e. this combination of d/history (to filter out also people that had a Tesla in the past but do not have it anymore) and not ? 🙏
(d/q '[:find ?e
:where [?e :person/id]
(not [?e :person/car ?car]
[?car :car/make "Tesla"])]
(d/history db))
#2021-09-0712:31favilaSeems right#2021-09-0714:11Jakub Holý (HolyJak)Thank you!#2021-09-0712:21Jakub Holý (HolyJak)Hi @augustl @magnars ! You have used on-prem in the past, any idea whether it is possible / how to use with it - from my application = peer process - the library https://github.com/hden/ragtime.datomic, which builds on the datomic.client.api instead of the Peer datomic.api? I would rather not have to start https://docs.datomic.com/on-prem/peer/peer-server.html so that the first peer instance can use it when running Ragtime... 🙏#2021-09-0712:30augustlI've never felt compelled to wrap the datomic API so I have little experience to share on that front :)
#2021-09-0713:33FredrikReplacing API calls such as (satisfies? client-protocols/Connection conn) with (instance? datomic.Connection conn-mem), and (datomic/transact conn {:tx-data tx-data}) with (datomic/transact conn tx-data), should probably suffice.
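Fredrik's substitutions could be wrapped up as a small shim so a client-api-based library can run against a peer connection. A hypothetical sketch under that assumption, not a complete adapter (ragtime.datomic may call other client functions too):

```clojure
(ns myapp.client-shim
  "Hypothetical adapter: expose client-api-shaped calls over the Peer API."
  (:require [datomic.api :as peer]))

(defn transact
  "Client-style transact: takes an arg map with :tx-data and derefs the
  future returned by the peer API."
  [conn {:keys [tx-data]}]
  @(peer/transact conn tx-data))

(defn connection?
  "Replacement for (satisfies? client-protocols/Connection conn)."
  [conn]
  (instance? datomic.Connection conn))
```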
#2021-09-0713:57souenzzoHello. Can I use #"regex" as a parameter of a query in datomic.client.api ?
I used to do that in on-prem.
Testing in dev-local it works, but i suspect that if I run it in cloud, it will have some issue in serialization/parsing.
(example in thread)#2021-09-0713:57souenzzo(let [client (-> {:server-type :dev-local
:system "hello"}
d/client
(doto (d/delete-database {:db-name "hello"})
(d/create-database {:db-name "hello"})))
conn (d/connect client {:db-name "hello"})]
(d/q '[:find ?n
:in $ ?re
:where
[?e :db/ident ?ident]
[(name ?ident) ?n]
[(re-find ?re ?n)]]
(d/db conn) #"^[a-t]+$"))
=>
[["add"] ...]#2021-09-0715:12grzmgood morning, all! Looking at the REBL page on https://docs.datomic.com/cloud/other-tools/REBL.html#nREPL I see that the id for the nREPL section is spelled wrong (`<h3 id="nRPEL">nREPL</h3>`) which makes the anchors fail.
#2021-09-0717:24octahedrionI'm using datomic.ion.dev :as dev for Ions deployment and I can successfully (dev/push {}) , deploy etc. However I can only do it once -- on subsequent times I get:
Execution error (RejectedExecutionException) at java.util.concurrent.ThreadPoolExecutor$AbortPolicy/rejectedExecution (ThreadPoolExecutor.java:2055).
Task java.util.concurrent
and the only way I can find to fix it is to restart my REPL, which is something nobody wants to have to do. How can I make it work repeatedly?#2021-09-0717:45tony.kayI'm working on this datomic streaming backup library, and I am seeing a Datom from our production database that I do not understand. Perhaps someone can fill in a gap in my knowledge, or perhaps I just have corruption in the database.
I have a Datom [E A V] where A is a ref attribute. If I look (using the datoms API) at the source database on the :vaet index I definitely have this Datom; however, when I try to pull this "target" entity I get nothing...well, technically I get this:
(d/pull db '[*] 69409970038386856)
=> {:db/id 69409970038386856, :inventory/_product ...}
when I try to find an EAVT with that V as the E...nothing. There is no "target" entity for this ref attribute.#2021-09-0718:21tony.kayActually, it looks like creating a ref to nowhere is perfectly fine in Datomic...#2021-09-0718:22tony.kayI can just make up a random int and point a ref at it#2021-09-0718:23tony.kaySo, probably a mistake in the code somewhere made that...I guess I'll just elide those in restores#2021-09-0717:45tony.kayis that corruption, or did I miss something in Datomic class?#2021-09-0717:48tony.kayI did queries with as-of databases as well, and don't see the target at the t just before the transaction that asserts this ref datom#2021-09-0718:43stuarthalloway@tony.kay A better way to look at it is "there is no such thing as nowhere". An entity is logically derived from all datoms with a common e. There are many valid modeling reasons that the number of such datoms could be zero.#2021-09-0718:45tony.kayBut if I try to transact a new entity with that explicit E later, Datomic refuses to...so how would I ever "add facts"?#2021-09-0718:46tony.kayIt just seems to me that the lack of a ref should be tied hand-in-hand with a lack of facts to point to#2021-09-0718:46stuarthallowayat all points in time?#2021-09-0718:47stuarthallowayat one point in time?#2021-09-0718:47tony.kayI add EAV where A is a ref and V is a long. Later, I add EAV with E as that original V...Datomic refuses#2021-09-0718:47stuarthallowayThat is not always true.#2021-09-0718:48tony.kay🤷#2021-09-0718:45stuarthalloway'Entities do not "exist" in any particular place; they are merely an associative view of all the datoms about some E at a point in time. If there are no facts currently available about an entity, Database.entity will return an empty entity, having only a :db/id.' 
-- https://docs.datomic.com/on-prem/overview/entities.html#2021-09-0718:49stuarthalloway@tony.kay I think the most helpful doc improvement in this area would be a table enumerating the validity constraints enforced by Datomic at transaction time. With that, at least it would be clear that what you are imagining differs.#2021-09-0718:50stuarthallowaySecond most helpful would be more explanation of 'why'.#2021-09-0718:50tony.kayYes, such documentation would be helpful...but for the present case: How would I ever go about making that ref point to real facts in later time? I'm not able to manage it#2021-09-0718:50tony.kayDatomic just refuses to let me use that number (I put in as V for a ref datom) as an E in a new entity#2021-09-0718:51tony.kayoh...do I have to resolve the E of that datom and tie it together in the tx?#2021-09-0718:53stuarthallowayThis is still about copying an existing db, right?#2021-09-0718:53tony.kayeventually...should just elide those datoms or not I guess.#2021-09-0718:53tony.kaybut also about a hole in my understanding as well#2021-09-0718:56stuarthallowaywhat error do you get when Datomic rejects your datom?#2021-09-0718:56tony.kayI guess in the usage of such a thing, I can always re-transact the edge with a new entity...and the old random long that was the V will be replaced. So, no big deal.
On restore, how would I go about picking a value to restore this edge?#2021-09-0718:56tony.kay(d/transact c {:tx-data [[:db/add "thing" :invoice/items 82387432]]} )
=>
{:db-before #datomic.core.db.Db{:id "5b35b695-b633-4376-b899-b28c97fdd596",
:basisT 85,
:indexBasisT 0,
:index-root-id nil,
:asOfT nil,
:sinceT nil,
:raw nil},
:db-after #datomic.core.db.Db{:id "5b35b695-b633-4376-b899-b28c97fdd596",
:basisT 86,
:indexBasisT 0,
:index-root-id nil,
:asOfT nil,
:sinceT nil,
:raw nil},
:tx-data [#datom[13194139533398 50 #inst"2021-09-07T18:56:20.982-00:00" 13194139533398 true]
#datom[79164837200686 316 82387432 13194139533398 true]],
:tempids {"thing" 79164837200686}}
(d/transact c {:tx-data [[:db/add 82387432 :invoice-item/precise-quantity 3.4]]} )
Execution error (ExceptionInfo) at datomic.core.error/raise (error.clj:55).
:db.error/invalid-entity-id Invalid entity id: 82387432#2021-09-0718:56tony.kayI picked 82387432 out of thin air#2021-09-0718:57stuarthallowayWhenever you do not have an id yet, tempids are the right answer -- then update your knowledge from the tempid map after the transaction#2021-09-0718:58stuarthallowayso picking "82387432" (note the quotes) out of thin air would be fine, albeit not very semantic#2021-09-0718:58tony.kaybut my point was to test what happens if someone has put an arbitrary value as a ref's target...like how you would heal it...and as I said, it looks like you just re-transact the edge and get a new ID#2021-09-0718:59tony.kaybut from a "restore" perspective, what is the right thing to do? I cannot just use a tempid as a value#2021-09-0718:59tony.kaythat is rejected as well#2021-09-0719:00stuarthallowayHow could "someone put an arbitrary value" -- they would have the same problem you did, being rejected by Datomic.#2021-09-0719:00tony.kayThat first transaction WORKS#2021-09-0719:00tony.kayeven though that is a random, unused value in Datomic#2021-09-0719:00tony.kayand :invoice/items is a ref many#2021-09-0719:00stuarthallowayBut who cares?
No subsequent transaction can say anything else about the entity.#2021-09-0719:00tony.kayright, so I should elide that datom in the restore, right?#2021-09-0719:01stuarthallowaySo it is dead on arrival, and your restore process can know that the code that made it was buggy, and just drop it.#2021-09-0719:01tony.kayOK, that is the conclusion I had come to...was just making sure I wasn't missing something#2021-09-0719:01tony.kaylike somehow an edge to nowhere was a useful "fact" in itself#2021-09-0719:02stuarthallowayRe: elide -- yes, but I would be worried that that original program was losing data you did not know about.#2021-09-0719:02tony.kaythat was my worry, thus my question...#2021-09-0719:02stuarthallowayAn edge to nowhere can be a useful fact -- maybe it was not nowhere at some other point in time.#2021-09-0719:02tony.kayDatomic removes refs on the referent being removed, no?#2021-09-0719:03tony.kayoh#2021-09-0719:03tony.kaybut only for retractEntity#2021-09-0719:03stuarthallowayThe problem you are describing is not merely an edge to nowhere, it is an edge to neverwhere
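Stuart's tempid advice can be sketched as a minimal example (a sketch assuming a client-api connection `conn` and the `:invoice/*` attributes from the conversation):

```clojure
;; Sketch: use a string tempid for the not-yet-existing item instead of a
;; made-up entity id, then resolve the real id from the :tempids map.
(let [{:keys [tempids]} (d/transact conn
                          {:tx-data [{:db/id "item-1"
                                      :invoice-item/precise-quantity 3.4}
                                     [:db/add "thing" :invoice/items "item-1"]]})
      item-eid (get tempids "item-1")]
  ;; item-eid is now a real entity id that later transactions can reference
  item-eid)
```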
#2021-09-0719:04Daniel Jomphe(feels like a fairy tale) - I'm watching your convo as I'm going to try out Tony's lib on our db.#2021-09-0720:19tony.kayfeel free to help out 🙂#2021-09-0720:29Daniel JompheWill try, although I'm only starting out with Datomic!#2021-09-0720:30tony.kayI hope these last two bugs I just fixed will be the last...but this is my 15th attempt at a restore.#2021-09-0720:31tony.kayat least the "resume" of an interrupted restore seems to be working well, so I'm not having to start over from scratch#2021-09-0720:31Daniel JompheThat's a great property!#2021-09-0720:32tony.kayat this speed of restore I'm looking at 44 DAYS to do one...having to start over isn't a thing I can stand#2021-09-0720:33tony.kayMake sure you're using 0.0.15, which is backward compatible with backups made since I had to rework the tempid remapping logic.
#2021-09-0720:34tony.kay0.0.12#2021-09-0720:35tony.kaySo, if you made a backup with a version older than that, it is no use, and never was. Still working out the kinks.#2021-09-0720:35tony.kayDon't expect the backup to change again. The streamed data was fine...I just wasn't storing the right metadata.
#2021-09-0720:36Daniel JompheSo this is going to be a permanent process running alongside the apps.
The operational characteristics of the solution are a bit unclear to me for now.
Each of our own environments is its own AWS account with its own Datomic stack.
So it looks like I'd need my prod ion to include the backup process, pushing to a shared s3 bucket, and my e.g. staging ion to run the restore process, but then this staging db should be read-only, etc. etc., lots of sub-questions appear.#2021-09-0720:38Daniel JompheSince our db is so small for now, I should just start experimenting in a single environment, with in-memory storage, then scale up the ops gradually from there, I suppose.#2021-09-0720:38tony.kayThe primary use-case is streaming replication: Yes, a node in your production stack continuously runs backup-segment! (after a delay, or some number of txes have occurred..up to you). That puts the next tx-range into storage (typically s3). Then on a different system (in my case it needs to be in diff region) I run a simple app that does nothing but run restore-segment! in a loop, which adds those txes to the target db.#2021-09-0720:39tony.kayAnd yes, if you transact something new against that target, it is now no longer capable of restoring from the original, because the tx-time will be too far in the future#2021-09-0720:39tony.kayThe only time you'll ever write to that db other than restore is in a disaster recovery, where it is now the new "master"#2021-09-0720:39Daniel JompheSo we should probably restore to 2 dbs in case we need to stand-up one of those!#2021-09-0720:40tony.kaysure, can restore to as many as you want (to pay for 😉 ) at a time#2021-09-0720:40tony.kaybut if you "failed over" that means your original is trash, and you probably don't or cannot get any more data from it#2021-09-0720:42tony.kaybut you might want more than one restore running just in case one of your restores has a critical failure...because it takes so long to get a restore "caught up".
At least you wouldn't have a DR time gap of weeks/months waiting for the new restore to get caught up#2021-09-0720:45Daniel JompheIt does feel like we're going to have to invest weeks/months of careful planning and trials into this!#2021-09-0719:04tony.kayyeah, I did scan the history looking for related facts, but found none...coworker is certain it is from a bad data fix in the past#2021-09-0719:05tony.kayuseful thing to have 58M transactions to test your OSS library against 😄
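The streaming-replication loop Tony describes might be sketched as follows; `restore-segment!` is the name from the conversation, but its exact signature and the `store` argument here are assumptions, not the library's real API:

```clojure
;; Hypothetical sketch of the follower side: keep applying backed-up tx
;; segments from storage (e.g. s3) to the target db, with a poll delay.
(defn restore-loop! [target-conn store]
  (loop []
    (restore-segment! target-conn store) ; apply the next tx-range, if any
    (Thread/sleep 5000)                  ; delay between polls, tune to taste
    (recur)))
```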
#2021-09-0719:07stuarthalloway"Alexa, tell @jaret to make a task for me to make a table documenting tx constraint enforcement."
#2021-09-0719:08Daniel JompheTo be pronounced Alexslack#2021-09-0719:10ghadihave encountered the same situation in my own transaction replayer script#2021-09-0719:12ghadiI think the strategy I took was to drop all heretofore unseen eids that appeared in :v position without ever appearing in :e position#2021-09-0719:15Daniel JompheWhat to think of this... 4 cognitects in the last 10 minutes chiming in about Datomic Cloud backup & restores. 🙂
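ghadi's drop-the-dangling-refs strategy could be sketched generically like this (a sketch: datoms are represented as plain maps, and `ref-attr?` is a hypothetical predicate for ref-typed attributes):

```clojure
;; Sketch: replaying datoms in t order, drop any ref datom whose :v is an
;; entity id that has never yet appeared in :e position.
(defn drop-dangling-refs [datoms ref-attr?]
  (:kept
   (reduce (fn [{:keys [seen] :as acc} {:keys [e a v] :as d}]
             (-> (if (and (ref-attr? a) (not (seen v)))
                   acc                          ; dangling ref, drop it
                   (update acc :kept conj d))
                 (update :seen conj e)))
           {:seen #{} :kept []}
           datoms)))
```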
#2021-09-0719:20jaretHaha! well, to be fair it's been on our minds for 4 years 😉.
#2021-09-0719:22Daniel JompheHaha! Let it fly, who knows this time if it'll finally materialize into something concrete. 🙂#2021-09-0719:24stuarthalloway@tony.kay important non-obvious detail for your work -- You should not assume that Datomic's built-in attributes have the same entity ids in all databases, therefore you should maintain a table from attribute keyword to entity id.#2021-09-0719:26stuarthallowayIt might be very tricky to discover this assumption in testing, as all dbs made in your account might very well have a common set of attribute entity ids.#2021-09-0719:26tony.kayright, I did not expect automated tests to catch that kind of thing, since there is no version diff#2021-09-0719:28tony.kayYeah, I track it via:
(id->attr (d/as-of db #inst "2000-01-01"))
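A minimal version of such a table, reusing the `id->attr` name from the message above (a sketch assuming the client api, where `d/datoms` takes an arg-map):

```clojure
;; Sketch: map entity id -> attribute keyword for every :db/ident in a db
;; value, so built-in attribute ids are never assumed equal across databases.
(defn id->attr [db]
  (into {}
        (map (juxt :e :v))
        (d/datoms db {:index :aevt
                      :components [:db/ident]
                      :limit -1}))) ; the client api defaults to 1000 otherwise
```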
#2021-09-0719:29tony.kaydon't remember why I didn't want the user's schema in this mapping...but I am tracking them#2021-09-0719:30tony.kayOh, right, I don't need theirs because I'm tracking the real-time remappings of IDs, including those of user schema ident entities#2021-09-0719:30tony.kayso I just pulled the built-in ones since those don't get "restored"#2021-09-0719:30tony.kaythanks for the tip, but yeah, covered that one#2021-09-0719:42Joe Lane@tony.kay That approach assumes all Datomic attributes are present at the birth of the database, which isn't true for databases which upgraded their schema via https://docs.datomic.com/client-api/datomic.client.api.html#var-administer-system .
For databases that were upgraded, the attribute identifiers for tuple related idents will be introduced both at a later t than #inst "2000-01-01" and they will also have numbers similar to user defined attributes.
Where this can become extremely complicated is that the attribute identifiers for tuple related idents in the new destination database will be present at the birth of the database and have a lower, different number than from the source db.#2021-09-0719:45tony.kay@U0CJ19XAM so my restore algorithm treats any schema after that point in time as something to restore on the target, which you're right, could present a difficulty since that schema will be there in a "new db" but would also be in the txn stream from the old one.#2021-09-0719:46tony.kaywon't the :db/ident cause an "upsert" in that case, though? I'll have to analyze it#2021-09-0719:46tony.kay(we actually have a db that has this exact situation, so if my restore completes it will be a good indicator that what I did "works" for that case)#2021-09-0719:51tony.kayideas on how I might write a test around that?#2021-09-0720:02tony.kayguess I could find what that looks like in the backup and add that to a test to restore to a db that has it from birth#2021-09-0720:28tony.kaySo, I tried the upgrade txn against a db using my algorithm, and it seems OK. It turned the install into an upsert, was OK with the install attribute against a tempid that already existed, etc. I think the way the lib is written, it will naturally "just work" against that scenario @U0CJ19XAM, but thanks for pointing it out so I could try it to see.#2021-09-0720:30tony.kayI'm storing the "original ID" on the entities that have been created on the new db, so I can use that to resolve the remappings. So, this "upgrade schema" transaction put the mapping on the pre-existing schema in the new db, so that the future transactions in the stream should automatically remap the upgraded schema IDs to the new database's "original" schema IDs. 
I was worried that "install attribute" would be special, but it seems to have handled it ok.#2021-09-0719:25tony.kayright...pretty sure I'm doing that, even for the built-ins#2021-09-0719:25tony.kayThe a's always come across as integers, so I had to include that mapping in the backup info#2021-09-0807:44Ivar RefsdalI have a Datomic backend application that primarily writes and reads from the database. Sometimes however it needs to talk to external HTTP services and put response values from those into the database. These values are currently added as a part of a larger transaction. Sometimes those HTTP requests fail, and then the whole transaction fails.
What is a good strategy to solve this?
I'm thinking of adding a queue job in the database to make HTTP requests
in order to accomplish something like https://microservices.io/patterns/data/transactional-outbox.html
Is there a library for using Datomic as a queue consumer?
Thanks.#2021-09-0812:05Jakub Holý (HolyJak)Cloud or on-prem?
And I did not really understand
> These values are currently added as a part of a larger transaction. Sometimes those HTTP requests fail, and then the whole transaction fails.
A transaction is just data, and presumably you execute it after you get a response from the remote, no? How can a failed/missing response make the tx fail? Do you have some tx function that checks for the presence of data returned from the remote? Do you need to do that as a part of a single, larger transaction? Can't you handle the job queue in the app and write results into the DB in small, dedicated transactions? I guess I simply do not understand your use case well enough...#2021-09-0815:51César OleaNot sure if you're using cloud or on-prem. But if cloud and using ions, I would build an ion and wire the resulting lambda to be a consumer from sqs for example. When it's time to talk to the external HTTP service, publish a SQS message and let the ion handle the logic.#2021-09-0815:53César OleaHowever I'm very interested in using Datomic to implement the transactional outbox pattern. I was thinking of adding a stream to the DynamoDB tables that Datomic uses for persistence, but I'm not sure what data is contained where as there are multiple DynamoDB tables created. Hopefully this is documented somewhere.#2021-09-0818:27Joe LaneThat dynamo stream won’t have what you’re looking for @U02DNF3TW3E
It’s all encrypted fressian#2021-09-0818:30César OleaThanks @U0CJ19XAM you saved me hours of looking around. In that case what would be a good way to implement the transactional outbox pattern in a Datomic Cloud instance?#2021-09-0818:37Joe LaneThere are many ways to slice it that are better or worse depending on data volume, latency requirements, idempotency capabilities of producing and consuming systems etc.
I’d need far more info to give any sort of production ready recommendation. #2021-09-0818:51Joe Lane@UGJE0MM0W Do I understand this correctly, you're issuing HTTP requests from within a transaction function?#2021-09-1411:03Ivar RefsdalIt's on-prem.
I'm sorry about replying so late. I also obviously haven't explained myself well enough as multiple people have misunderstood me.
I've tried to explain the case better here: https://github.com/ivarref/yoltq#rationale
That repo is also a Datomic queue implementation of the transactional outbox pattern.
It should be portable to cloud as well @U02DNF3TW3E, but I suppose with a polling only strategy (no tx-report-queue).#2021-09-0812:23Jakub Holý (HolyJak)I am trying to write an interesting graph query and want to check with you whether there is a better approach. I have:
• A graph of components that might be connected by directional, labeled references (i.e. 2 components may be connected by multiple, different references)
• Components belong to a workspace but references can go from a component in one workspace to a component in another one
• => I want to fetch all components in a workspace with their references AND also the components at the other end of their references, if they are not already being fetched (i.e. if they belong to a different workspace)
• ... but I want to exclude components from such other workspaces that the user is not authorised to access
My approach is to do this in 4 steps:
1. Make a filtered DB so that only the workspaces and components the user can access are present
2. Pull all the components from the desired workspace together with their references and the IDs of the components at the other end
3. Extract IDs of all reference source/target components that have not been pulled (b/c they are in different workspaces)
4. Pull these missing components, by their IDs
Does that make sense? The query for 2.:
[:find (pull ?component [*
                         {:reference/_source [* {:reference/target [:db/id]}]}
                         {:reference/_target [* {:reference/source [:db/id]}]}])
 :in $ ?workspace-id
 :where [?component :component/workspace ?workspace-id]]
Then I would do 3., i.e. something like (def fetched-comp-ids (->> result (map :db/id) set)) and (pseudocode) (get-in result [ALL :reference/_source ALL :reference/target :db/id]) + similarly for reference/_target to diff those IDs against the fetched-comp-ids to get a set of extra-workspace-comp-ids. Finally, .4, I fetch those like so:
(d/q '[:find (pull ?component [* {:component/parent ...}]) ; include all ancestors too
       :in $ [?component-id ...]
       :where [?component :component/$id ?component-id]]
     db extra-workspace-comp-ids)
Is there a better way?
Also, the 2. query will load all intra-workspace references twice, once for the target and once for the source component. Is that a performance or memory problem or is the DB smart enough not to waste any resources and does it use structural sharing not to waste any memory? Or should I rather only pull the reference IDs and fetch the references themselves in a separate query?
Thank you for any advice!!!#2021-09-0813:43Linus EricssonI think you're trying to solve two different problems with your pull expressions at once: 1) the relations for a certain component 2) the data for the components.
It's hard to give a definite answer of course, but in general pull expressions don't cache the various expressions, so the data structure for one component pulled twice will be identical but different objects. The primitive data will be structurally shared (I cannot see any reason why they wouldn't).
It's probably a good idea to use a filtered db to restrict access for a certain user!
To get all the data, I think you should first deduce which components (and pull expressions) are requested for each component and then pull them in one go.
I would look closely into how the library pathom would solve this kind of problem.#2021-09-0813:47favilaI don’t see that you’re using named rules recursively in your query. Are you aware of this technique?#2021-09-0813:49favilaI’m also not sure how important the shape of the map projection is to you#2021-09-0816:06Jakub Holý (HolyJak)Thank you both!
> pull expressions don't cache the various expressions, so the data structure for one component pulled twice will be identical but different objects
Good to know! This sounds like something I would want to avoid.
And no, I am not using named rules. I am vaguely aware of them but not sure how they would benefit me here? I use recursion to get the (extra-workspace) component's parent and its parent etc. The pull expression for that seems simple enough?
The shape of the result is not critical, I can always reshape it in the code how I need.#2021-09-0816:09Jakub Holý (HolyJak)@UQY3M3F6D So if I understand you right, it would be better to
1. Pull all the components in the target ns, without references (there can be 10s of thousands of these in extreme cases)
2. Pull all the references that have these components as their source or target (I could either pass in IDs from 1. as a parameter or use Datalog to figure out the right references)
3. Proceed as in my original plan, to get the IDs of the reference end components that I do not have yet and to fetch them
Correct?#2021-09-0816:24favila> I use recursion to get the (extra-workspace) component’s parent and its parent etc.#2021-09-0816:25favilaYou rely on db filtering to exclude “not-allowed” components?#2021-09-0816:27favilaIf you do and this filtering is done correctly, it seems you can just pull recursively from your target workspace and there is no step 3#2021-09-0816:27favilabut I don’t understand what output is desired. Recursive pulling at arbitrary depth?#2021-09-0816:39Jakub Holý (HolyJak)Yes, my plan was to leverage db filtering for this. Regarding parents - which I pull recursively - I know that if a component is "allowed" then its parents are as well. I want to fetch all the ancestors of a component, so yes, an unlimited recursion, though the number of these is normally quite low. I want to get a component including its :parent , which is also a component, including its :parent , ... until the component is the root, with no parent of its own. So
{:component/$id 3, ...
:component/parent {:component/$id 2, ...
:component/parent {:component/$id 1, ...}}}
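The parent chain shown above is exactly what a recursive pull pattern produces (attribute names from the conversation; `...` in the pattern means unlimited recursion):

```clojure
;; Sketch: pull a component together with its whole ancestor chain.
(d/pull db '[:component/$id {:component/parent ...}] component-eid)
```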
#2021-09-0816:44Jakub Holý (HolyJak)I need step 3 because when I fetch the references in 2., some ends of these references - i.e. some components - have not been pulled yet in 1. (because they live in an external workspace). I could fetch references together with the components but then I would also fetch a lot of data I already have (the components in the workspace of interest + duplicates of components in external workspaces that are linked to multiple components in the ws. of interest). So the idea is to fetch just IDs (`{:reference/$id .., :reference/source <id of a component>, :reference/target <id of a component>}` ) and then fetch the missing ones.
:in % $ ?user ?c
:where
(reachable-edges ?c ?c1 ?ref ?c2)
(user-accessible ?user ?c1)
(user-accessible ?user ?c2)]
'[[(user-accessible ?user ?component)
;; Donno the criteria here, making one up
(not [?component :disallow ?user])
]
[(refs ?comp ?ref ?comp2)
[(ground [:reference/source :reference/target]) [?ref ...]]
[?comp ?ref ?comp2]]
;; The immediate edge
[(reachable-edges ?comp ?comp1 ?ref ?comp2)
[(identity ?comp) ?comp1]
(refs ?comp1 ?ref ?comp2)]
[(reachable-edges ?comp ?comp1 ?ref ?comp2)
[(identity ?comp) ?comp2]
(refs ?comp1 ?ref ?comp2)]
;; the next edge over
[(reachable-edges ?comp ?comp1 ?ref ?comp2)
(refs ?comp _ ?comp-next)
(reachable-edges ?comp-next ?comp1 ?ref ?comp2)]
[(reachable-edges ?comp ?comp1 ?ref ?comp2)
(refs ?comp-next _ ?comp)
(reachable-edges ?comp-next ?comp1 ?ref ?comp2)]]
[[1 :reference/source 2]
[2 :reference/source 1]
[1 :reference/target 2]
[2 :reference/target 1]
[1 :reference/target 3]
[3 :reference/source 4]
[3 :disallow-user "user"]
[4 :reference/source 3]
[3 :reference/target 4]
[4 :reference/target 3]
[5 :reference/target 6]]
"user"
1)#2021-09-0818:48favilasomething that walked the component refs recursively and built a list of edges#2021-09-0818:49favilafiltering them by accessiblity#2021-09-0908:20Jakub Holý (HolyJak)Oh, I see. It is far simpler than that. Sorry for not being able to express it properly! I will try again, better:
1. I have 3 kinds of relevant entities: workspaces, which group components; components; and references, which connect two components that might or might not be in the same workspace
2. What I want is 3 lists: (a) all components in a given workspace, (b) all references that start or end at one of these components, (c) all components from other workspaces that are at one end of any of these references (and are not in workspaces forbidden to the user) - and here I also want the chain of their parents. So the query for a and b is very simple, only c is a little more complicated#2021-09-0913:05favilaI think this is potentially all one query?#2021-09-0913:05favila
(pull ?c [*])
(pull ?ref [:reference/source :reference/target])
(pull ?c2 [:component/$id {:component/parent ...}])
:in % $ ?user ?workspace
:where
[?c :component/workspace ?workspace]
(involved-refs ?c ?ref _ _ ?c2)
(user-accessible-component ?c2 ?user)]
'[[(user-accessible-component [?c] ?user)
;; Donno the criteria here, making one up
[?c :component/workspace ?wk]
[?user :user/workspaces ?wk]]
[(involved-refs ?this-comp ?ref ?this-rel ?other-rel ?other-comp)
[(ground [[:reference/source :reference/target]
[:reference/target :reference/source]])
[[?this-rel ?other-rel]]]
[?ref ?this-rel ?this-comp]
[?ref ?other-rel ?other-comp]]]
unfiltered-db
user
workspace)#2021-09-0913:09favila(a) “all components in a workspace” is clause 1. (b) is clause 2, with a visibility check in clause 3 instead of a filtered db. (c) is the “other” component in the ref, which may or may not be in the same workspace, but is definitely visible#2021-09-0913:09favilaI guess my puzzlement was why bother processing the output of pull expressions to find more components when datalog can do it for you#2021-09-0919:54Jakub Holý (HolyJak)Ah, I have not realized I can have multiple pulls in a single :find. Thank you!
If I understand it correctly, this will work nicely for all components in workspace that are the source or target of a reference whose other end is a user-accessible component (in your example, in the user's workspace). But what about 2 components that are both in workspace ? And what if they have 2 different references between them? And even for the case you describe, with c2 in the user's workspace, we would fetch c2 N times if it is connected to components in the workspace via N different relations, no? And I suppose we want to avoid pulling the same component/reference repeatedly to avoid wasting both processing time and memory?#2021-09-0919:58Jakub Holý (HolyJak)In the workspace of interest, let's call it w1, I can have three components C1, C2, C3 with 9 different references between them plus 6 different references to components outside of w1, where 4 of these are in user-accessible workspace w2. I believe I want to fetch each of C1, C2, C3 and each of the 9+4 references and the 4 external components exactly once, because fetching anything repeatedly means Datomic has to construct the entity repeatedly, costing me more time and more memory. No?#2021-09-0919:59favilaI think it’s sensible to pull in separate steps. The point I was trying to make was that your “what components + refs to include” step is much more directly and clearly expressed as datalog queries rather than pull-walking#2021-09-0920:00favilaby changing the find you can return just the component ids + their refs + components then pull and reassemble#2021-09-0920:02favila(Also, pull has a default cardinality-many limit of 1000--people forget that)#2021-09-0920:28Jakub Holý (HolyJak)Ah, awesome, now I understand! Thanks a ton!
And yes, I absolutely missed the point that pull-many has a limit 😅 That explains why it was so relatively fast 😂#2021-09-0920:56favilasince you nerd-sniped me pretty hard already, this is how I imagine doing it, altering pull-expressions to your taste:
#2021-09-0920:56favila(let [refs (d/q '[:find (pull ?ref [:db/id :reference/source :reference/target])
:in % $ ?user ?workspace
:where
[?c :component/workspace ?workspace]
(involved-refs ?c ?ref _ _ ?c2)
(user-accessible-component ?c2 ?user)]
'[[(user-accessible-component [?c] ?user)
;; Donno the criteria here, making one up
[?c :component/workspace ?wk]
[?user :user/workspaces ?wk]]
[(involved-refs ?this-comp ?ref ?this-rel ?other-rel ?other-comp)
[(ground [[:reference/source :reference/target]
[:reference/target :reference/source]])
[[?this-rel ?other-rel]]]
[?ref ?this-rel ?this-comp]
[?ref ?other-rel ?other-comp]]]
db ; db value for the $ input (missing from the original argument list)
user
workspace)
component-refs (reduce
(fn [xs [ref]]
(let [c-source (-> ref :reference/source :db/id)
c-target (-> ref :reference/target :db/id)]
(-> xs
(update-in [c-source :reference/_source] (fnil conj []) ref)
(update-in [c-target :reference/_target] (fnil conj []) ref))))
{}
refs)
component-ids (vec (keys component-refs))
component-entity (zipmap
component-ids
(d/pull-many db '[*] component-ids))]
(into []
(map (fn [[cid refs]] (into (get component-entity cid) refs)))
component-refs))#2021-09-1009:30Jakub Holý (HolyJak)Neat, thanks a lot! I learned a great deal from this discussion, thank you for your generosity and time!#2021-09-1009:34Jakub Holý (HolyJak)#2021-09-0919:11azHi all, working with in memory com.datomic/datomic-free "0.9.5697" - Running
(d/q '[:find ?e
:keys id
:where [?e :person/name "Bob"]]
(d/db conn))
Getting error
Execution error (IllegalArgumentException) at datomic.query/validate-query (query.clj:316).
Argument :keys in :find is not a variable
Any ideas? Are keys not available in the mem db?
#2021-09-0919:13Joe Lane@U0AJQJCQ1 :keys is not available in free, however it is with https://docs.datomic.com/cloud/dev-local.html , which is available free of charge.#2021-09-0919:49azThank you @U0CJ19XAM #2021-09-0921:10ennWill excising an entity also excise component entities?#2021-09-0921:26FredrikWhen the target of excision is an entity, and a datom to be excised is a reference whose attribute has :db/isComponent true, then the component entity (and all of its attributes) will be excised recursively.
https://docs.datomic.com/on-prem/reference/excision.html#excise-specific-entities#2021-09-0921:28FredrikAlso,
When excising an entire entity, all component entities are also excised, as are all inbound references to the excised entity.
So the answer is yes.#2021-09-0921:38ennperfect, thank you, I missed that section#2021-09-0921:41FredrikYou're welcome 🙂 I was wondering the same at some point, and knew I had read this somewhere before.#2021-09-1019:20azIs the entity api only for on prem or an older api? I was using d/entity in datomic free and after switching to dev-local and client api, I am not able to find this same functionality. Lastly, is there some doc I can refer to on the differences between the free and the dev-local version? I’m seeing some tutorials online that seem to have a mismatch between these versions. Thanks#2021-09-1019:32schmeeyes, entity doesn’t exist in cloud, pull is the suggested alternative. you can find the differences here:
• https://docs.datomic.com/on-prem/cloud/moving-to-cloud.html
• https://docs.datomic.com/on-prem/overview/clients-and-peers.html#2021-09-1020:15azThank you#2021-09-1023:05azHow can one go about debugging a datomic query? I'm hitting the wall with creating a recursive rule and I would love to be able to see why the query engine returns what it does. Is there any tooling out there for this?#2021-09-1023:21FredrikAre you able to share more info about the query?#2021-09-1023:23FredrikAs a general starting point, one can bind variables to partial results within the query and return that#2021-09-1023:59az@U024X3V2YN4 - https://www.loom.com/share/657b9127a79f40b889a777d7e3e2d090#2021-09-1100:01azI thought that would be easier to explain. All code is online: https://github.com/design-driven-research/grand-central/tree/add-datomic-dev-tools#2021-09-1100:04FredrikThanks for the thorough explanation. The first thing that popped to mind was https://docs.datomic.com/on-prem/query/pull.html#limited-recursion-example#2021-09-1100:10azThanks for this, so is pull the only realistic way to go for something like this? Also, will recursive pull only work if using the same attr?#2021-09-1100:10FredrikNo, pull is not the only way: https://github.com/Datomic/mbrainz-sample/blob/master/src/clj/datomic/samples/mbrainz/rules.clj#L11#2021-09-1100:21azThank you @U024X3V2YN4 I'm going to study this#2021-09-1101:08FredrikI think you are right that recursive pull only works on the same attribute. In your case you need to alternate between nodes and edges, so I'd probably go with something rule-based#2021-09-1102:33azGot it, thanks for finding that out#2021-09-1103:00FredrikNo problem, shout out if you need any more help!#2021-09-1214:00schmeeI have the following query, which finds all the quantities of products in storage and subtracts any reservations:
(defn find-quantities-for-product [product-eid]
  (d/q '[:find ?sid ?pid ?q
         :keys storage/id product/id quantity
         :in $ [?p ...]
         :where
         [?sp :storage.product/product ?p]
         [?s :storage/products ?sp]
         [?s :storage/id ?sid]
         [?p :product/id ?pid]
         [?sp :storage.product/quantity ?sq]
         [?r :reservation/product ?p]
         [?r :reservation/storage ?s]
         [?r :reservation/quantity ?rq]
         [(- ?sq ?rq) ?q]]
       @db
       product-eid))
since Datalog unifies everything, this means that if something is in storage somewhere, but does not have a reservation, it does not get included in the result. I’ve tried every imaginable combination of or, or-join and get-else to accomplish something like “if there is a reservation, get the amount for it, otherwise consider it 0”. Is there a way to do this or do I need to do two queries and manually combine the results?#2021-09-1216:47FredrikWould something like this work?
(def reservations-rules
  '[[(reservation-quantity ?p ?s ?rq)
     [?r :reservation/product ?p]
     [?r :reservation/storage ?s]
     [?r :reservation/quantity ?rq]]
    [(reservation-quantity ?p ?s ?rq)
     (not-join [?p]
       [?r :reservation/product ?p]
       [?r :reservation/storage ?s])
     [(ground 0) ?rq]]])
(defn find-quantities-for-product [product-eid]
  (d/q '[:find ?sid ?pid ?q
         :keys storage/id product/id quantity
         :in $ % [?p ...]
         :where
         [?sp :storage.product/product ?p]
         [?s :storage/products ?sp]
         [?s :storage/id ?sid]
         [?p :product/id ?pid]
         [?sp :storage.product/quantity ?sq]
         (reservation-quantity ?p ?s ?rq)
         [(- ?sq ?rq) ?q]]
(d/db conn-mem2) reservations-rules product-eid))#2021-09-1216:50FredrikAssuming I got your schema right and trusting a few tests, I think this should work. But I'd also consider splitting it up into multiple queries and move the logic from the query into the code.#2021-09-1220:21favilanm, yes, those are the rules I would suggest too.#2021-09-1220:24FredrikSooner or later, all variables inside a not will need unification. If I understand the semantics correctly, then in this case, the difference between not and not-join is that when invoking the second version of the rule reservation-quantity , we don't want to (indeed cannot) unify ?r .#2021-09-1221:20favilaI think he still needs to unify on ?s#2021-09-1221:20faviladepends on his schema, but it looks like a reservation is found by matching a storage and a product#2021-09-1221:23Fredrik?s is unified before calling reservation-quantity#2021-09-1221:26FredrikSo either the first version of the rule succeeds, with a unified reservation, or the second rule succeeds, but (and this is why using or wouldn't work) both versions of the rule never succeed at the same time#2021-09-1221:28FredrikYour observation on the schema is the assumption I made when testing this too. As an aside, if the product entities referenced the reservations directly, this query would have been a lot easier to write.#2021-09-1221:45schmeeadding ?s to the not-join gives exactly what I need! for my understanding, is it possible to write this without rules? or is this “either or” behavior exclusive to rules? thank you very much @U024X3V2YN4 and @U09R86PA4! 🙏#2021-09-1221:46favilaYou can use or+and. 
This is just sugar for rules#2021-09-1221:48favilaThe xor behavior is from the two implementations having mutually exclusive matches#2021-09-1221:49favilaThey have the same two clauses, but one not-join-s them the other doesn’t #2021-09-1221:51schmeeyeah, just tried with or/and it gives the same result :thumbsup:#2021-09-1221:52Fredrik@U3L6TFEJF where did you need to add the ?s ? Inside not-join [?p ?s] ?#2021-09-1221:53schmeegotcha, I’ll commit the not-join pattern to my brain! one more question for my understanding: why doesn’t or-join work here? intuitively I’m asking for “this value or 0”, so it seems I’m not understanding what or-join means in Datomic fully :thinking_face:#2021-09-1221:53schmee@U024X3V2YN4 yep!#2021-09-1221:55schmeeahh:
> With or clauses, you can express that one or more logic variables inside a query satisfy *at least one* of a set of predicates.#2021-09-1221:55FredrikYou are wondering why this doesn't work?
(or-join [?rq]
  (and [?r :reservation/product ?p]
       [?r :reservation/storage ?s]
       [?r :reservation/quantity ?rq])
[(ground 0) ?rq])#2021-09-1221:56FredrikIf a product has a reservation, it will return the quantity both with and without the reservation applied#2021-09-1221:56schmeeyeah, I was thinking about or as “this or that, but not both”, but reading the docs it’s clear that it matches at least one#2021-09-1221:56FredrikExactly 😉#2021-09-1221:57schmeeclarity achieved, cheers to you both! 🌸#2021-09-1221:59FredrikFor my own clarity: Why was adding ?s in the not-join needed?#2021-09-1222:02schmeefor whatever reason, without ?s in the not-join products in storages with no reservation don’t get included in the result#2021-09-1222:07FredrikOf course, that makes sense. Not having ?s in the vector means the unification of ?s to only those storages with reservations "escapes" the not-join.
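[Editor's note] For readers skimming this thread, the or+and form that schmee reports as working can be sketched as follows (a sketch only, assuming the same reservation schema as the rules above; the not-join is what makes the two branches mutually exclusive):

```clojure
;; Sketch: one or-join replacing the two reservation-quantity rule branches.
;; Branch 1 matches storages that have a reservation for ?p and binds its
;; quantity; branch 2 matches storages with no such reservation and binds 0.
(or-join [?p ?s ?rq]
  (and [?r :reservation/product ?p]
       [?r :reservation/storage ?s]
       [?r :reservation/quantity ?rq])
  (and (not-join [?p ?s]
         [?r :reservation/product ?p]
         [?r :reservation/storage ?s])
       [(ground 0) ?rq]))
```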
#2021-09-1313:43ghadialways always always pass the db as the primary argument to a function that queries a database.
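[Editor's note] A minimal illustration of that advice (function and attribute names here are hypothetical):

```clojure
;; Good: the caller picks the db value (current, as-of, history), so the
;; same call against the same db value is reproducible.
(defn product-names [db]
  (d/q '[:find [?name ...]
         :where [_ :product/name ?name]]
       db))

;; Avoid: grabbing the db from a connection (or an atom) inside the
;; function hides which basis the query actually ran against.
(defn product-names-hidden-db []
  (d/q '[:find [?name ...]
         :where [_ :product/name ?name]]
       (d/db conn)))
```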
#2021-09-1315:28schmee@U050ECB92 yessir, this is still very much at the sketching stage so I’m taking some liberties :thumbsup:#2021-09-1223:12xlfeIs there a way to use fulltext search on a heterogeneous tuple with some string types? I can transact the following schema, but can't figure out how to get fulltext to match the entity
{:db/ident :patient/name
 :db/valueType :db.type/tuple
 :db/cardinality :db.cardinality/many
 :db/fulltext true
 :db/tupleTypes [:db.type/long
                 :db.type/keyword
                 :db.type/string
                 :db.type/string
                 :db.type/string
                 :db.type/string
                 :db.type/instant
                 :db.type/instant]}#2021-09-1223:32xlfeMy tests would seem to indicate that despite the successful schema txn, there is no fulltext index generated for entities with :patient/name#2021-09-1300:34FredrikIt seems that fulltext only applies to :db/valueType :db.type/string .#2021-09-1300:35FredrikI hope others can come up with a better workaround, but if you want to keep the tuple model I can suggest something like this:
(def schema [{:db/ident :
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one}
{:db/ident :
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/fulltext true}
{:db/ident :
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/fulltext true}
{:db/ident :
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/fulltext true}
{:db/ident :
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/fulltext true}
{:db/ident :
:db/valueType :db.type/instant
:db/cardinality :db.cardinality/one}
{:db/ident :
:db/valueType :db.type/instant
:db/cardinality :db.cardinality/one}
{:db/ident :patient/name
:db/valueType :db.type/tuple
:db/cardinality :db.cardinality/one
:db/fulltext true
:db/tupleAttrs [:
:
:
:
:
:
:
:#2021-09-1300:37FredrikThen you can run queries like this:
[:find ?long
:where
[(fulltext $ : "s1") [[?e ?val]]]
[?e :patient/name ?pn]
[(untuple ?pn) [?long]]]#2021-09-1306:32danbuneaSuper newbie question: I have a schema where I need a custom attribute in Datomic Cloud:
[{:db/ident :user/id
  :db/valueType :db.type/keyword
  :db/unique :db.unique/identity
  :db/cardinality :db.cardinality/one}
 {:db/ident :user/name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :field/label "Name"}]
the result:
; Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
; Unable to resolve entity: :field/label
#2021-09-1306:39Fredrik:field/label must itself be declared in the schema
{:db/ident :field/label
:db/valueType :db.type/string
...}#2021-09-1306:40FredrikThe error comes from trying to set the value of an attribute, in this case :field/label, which Datomic is not yet aware of#2021-09-1307:40danbuneaHi @U024X3V2YN4 , thanks 🙂#2021-09-1405:04steveb8nQ: I remember someone mentioning that using large/nested pull expressions in queries is slow. is that documented or explained somewhere? what’s the best alternative?#2021-09-1405:16steveb8nI can think of these alternatives:#2021-09-1405:17steveb8n1. query to get ids, map over each using d/pull#2021-09-1405:17steveb8n2. return maps#2021-09-1405:17steveb8nboth assume in-mem access i.e. peer or Ion execution env#2021-09-1405:39favilahttps://docs.datomic.com/cloud/query/query-executing.html#qseq#2021-09-1405:39favilaIf you can consume and release results lazily, you can use qseq to delay realization of the pull.#2021-09-1405:42steveb8nah ok. so it’s not direct perf but instead about memory consumption. I presume this means a transducer can improve perf and mem when processing the results too#2021-09-1405:49steveb8nthanks @U09R86PA4#2021-09-1505:33caleb.macdonaldblackWhere can I find exceptions in my cloudwatch logs for datomic ions? I’m having issues where I cannot see compilation exceptions or runtime exceptions. My logs aren’t showing any errors and then my deploy just fails#2021-09-1511:42Ivar RefsdalAre there any known bad version combos of the Datomic peer library and yada/aleph?
Both libraries use netty under the hood.
For example one in-house project uses aleph 0.4.6 which bundles individual netty components with version 4.1.25.Final.
The same project uses Datomic version 0.9.5930 which bundles io.netty/netty-all 4.1.32.Final.
Yes I know it's an old version, but it seems to work. We've recently, after upgrading Datomic and aleph, had several problems, so we downgraded again.
For the record this project is built using boot uberjar and I don't know what/which concrete netty version is actually being used, if not some of both.
I'd rather get rid of yada/aleph (and boot), but I'm not sure that is happening anytime soon.
Thanks!#2021-09-1514:59markgdawsonIs there a way with datalog to return the value of a field if it exists and a default value if it doesn't? 🙂
As a (simplified!) example, in:
(d/q '[:find ?active
       :in $ ?name
       :where
       [?e :thing/name ?name]
       [?e :thing/active ?active]]
     db name)
When :thing/active is not set, nothing is returned. I'd like to return everything, but when :thing/active is not set I'd like to return true (as a default). Is that possible?#2021-09-1515:16futuroThis SO https://stackoverflow.com/questions/21101259/find-entities-with-missing-attributes-in-datomic answer seems promising#2021-09-1515:23markgdawsonThanks @U0JJ68RBR! That works for me. 🙂
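[Editor's note] The linked answer comes down to get-else; a sketch against the contrived query above (untested, assuming the same schema):

```clojure
;; get-else binds ?active to the value of :thing/active when the
;; attribute is present, and to the supplied default (true) otherwise,
;; so entities without the attribute still appear in the result.
(d/q '[:find ?active
       :in $ ?name
       :where
       [?e :thing/name ?name]
       [(get-else $ ?e :thing/active true) ?active]]
     db name)
```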
#2021-09-1515:16Tatiana KondratevichHi. Tell me what the error can mean:
message "Data did not conform"
when doing push. I am at a loss to understand where to look for the error, help.#2021-09-1517:18Jakub Holý (HolyJak)Push? Of code to Ion?#2021-09-1518:28Tatiana KondratevichYes, I run clojure -A:ion-dev '{:op :push}'#2021-09-1518:57Jakub Holý (HolyJak)I see, and https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ion-push does not help 🙂
It looks like an error from Clojure Spec. Perhaps re-check your config file(s) for correctness?#2021-09-1519:57Tatiana Kondratevich@U0522TWDA
ion-config.edn looks ok to me:
{:lambdas {:ensure-sample-dataset
           {:fn starter.lambdas/ensure-sample-dataset
            :description "creates database and transacts sample data"}
           :get-schema
           {:fn starter.lambdas/get-schema
            :description "returns the schema for the Datomic docs tutorial"}
           :get-items-by-event
           {:fn starter.lambdas/get-items-by-event
            :description "return inventory items by event"}
           :get-all-items
           {:fn starter.lambdas/get-all-items
            :description "return all inventory items"}}
 :http-direct {:handler-fn starter.http/get-items-by-event}
 :app-name "<name my project>"}
what else should i check?#2021-09-1520:02Jakub Holý (HolyJak)I don't know :(#2021-09-1519:44jaretHi All! https://forum.datomic.com/t/new-client-pro-1-0-72-and-new-console-0-1-227/1945#2021-09-1519:44jarethttps://forum.datomic.com/t/datomic-1-0-6344-now-available/1944
#2021-09-1601:57stuartrexkingDoes anyone have any advice on managing application lifecycle events with Ions? I want to be able to start and stop Kafka consumers in a reliable way.
#2021-09-1601:58stuartrexkingIs there an idiomatic way of managing services, like RDB connection pools etc.#2021-09-1608:40steveb8nQ: I had prod downtime with Datomic Cloud today. first time ever. I’m still trying to figure out what went wrong but killing the instances made it come back up. Trying to learn how to diagnose this…#2021-09-1608:42steveb8nthe only thing I can see is the indexer memory is high but normally it drops down when it reaches its ceiling. after the restart it jumped straight back to the prev level. Is this normal or should it restart at a min level (similar to jvm free)?#2021-09-1608:43steveb8nother than that, I’m still combing through logs to try and find a signal that explains it#2021-09-1608:55steveb8njust found “Too many open files” warnings in the logs. I am using aws client lib in Ions. could this be some kind of leak?#2021-09-1609:03steveb8npretty sure it’s a socket leak. probably due to aws client mis-use on my part. I wonder if there is a way to measure open “files” as a metric so I can monitor this#2021-09-1609:04steveb8nany advice you might have on how to find the leak would be much appreciate#2021-09-1609:06steveb8nanother useful idea would be a way to cause cloud instances to be killed automatically when the “Too many open files” point is reached. suggestions for this would also be much appreciated#2021-09-1609:06steveb8nI will try and find the leak myself but suggestions are welcome#2021-09-1609:30steveb8nI’m assuming this is an aws client problem but is there any other reason that “Too many files” warnings can happen in cloud?#2021-09-1613:59jaretSome folks encountering this issue on the forums have been attempting to isolate by using a file leak detector on the box and isolating the box to a standalone query group. https://forum.datomic.com/t/transactor-stops-responding-with-too-many-open-files-error/1863/3#2021-09-1614:00jaretI'd also put in that you have access to me with Datomic support. 
I'd be happy to help diagnose and troubleshoot. If you want to log a case, shoot me an e-mail and we can share logs/config there.#2021-09-1622:10steveb8nthanks @U1QJACBUM I’ve logged a ticket#2021-09-1611:57vlaaadI thought databases are values…
(let [db (get-db ...)]
  (= (d/as-of db #inst "2020")
     (d/as-of db #inst "2020")))
=> false#2021-09-1612:01vlaaad(let [conn (get-conn ...)
      db1 (d/db conn)
      db2 (d/db conn)]
  [(= db1 db2)
   (= (.-client db1) (.-client db2))
   (= (.-conn db1) (.-conn db2))
   (= (.-info db1) (.-info db2))])
=> [false true true true]
why?#2021-09-1612:14FredrikWhich Datomic are you using? datomic-free, datomic-pro, client?#2021-09-1612:29jarrodctaylorAt present, the Datomic API https://docs.datomic.com/client-api/datomic.client.api.html#var-db about database equality.#2021-09-1612:29jarrodctaylorhttps://forum.datomic.com/t/database-value-equality/1762/3#2021-09-1612:33vlaaadthanks#2021-09-1612:36vlaaadit confirms, although not explains the behavior#2021-09-1612:53favilaThe point of db as a value is that data retrieval from it is (mostly) repeatable. I’m not sure what the value of db equality would be because what kind of equality you want depends on what you are doing. Eg suppose dbs could compare, what would you use that for?#2021-09-1613:02emccuemaybe caching - if you asked for this value/query on the same db value, don't ask datomic we already know#2021-09-1613:03favilaDbs have a unique id, basis-t, ishistory, and optional as-of-t, since-t, and filter. You could say that if these are all equal, the dbs are equal, but that means some “equivalent” dbs are not equal (eg if one uses as-of to travel back to the same t as another unfiltered later db)#2021-09-1613:03emccuei mean yeah - equals in the more general sense should return equals if all observable properties of a thing are definitely equal#2021-09-1613:04favilaMaybe caching, but how likely is it that you have an equal but not identical db in this caching context?#2021-09-1613:04emccueits more or less accepted that if you don't know if they are equal or not - or don't want to guarantee it - they aren't#2021-09-1613:05emccuebut if all those things are equal you would know for sure they are equal so its odd#2021-09-1613:06emccue> how likely is it that you have an equal but not identical db in this caching context?
lets say they query like graphql.#2021-09-1613:06favilaBut the basis-t in a production db is constantly advancing#2021-09-1613:07emccueuser_as_of(year: 2020) {
  name,
  age
}
user_as_of(year: 2020) {
  name
  friends {
    name
  }
}#2021-09-1613:07emccueyou could get db/as-of 2020 for each of these#2021-09-1613:07emccuebig caveat is that i'm stupidly new to this so i'm not sure what basis-t is exactly#2021-09-1613:08favilaBasis-t is the latest T the db contains#2021-09-1613:10emccueokay so if a db contains that the utility might go down somewhat#2021-09-1613:10favilaA db must necessarily contain that#2021-09-1613:10emccuei dunno how likely it is to do 2 (d/db ...) calls before a write happens#2021-09-1613:11emccuebut it could be likely enough to warrant caching - idk#2021-09-1613:11favilaDepends on your system, but I’d say not likely enough to make caching useful#2021-09-1613:12emccueokay - but i think you'll agree that there is at least hypothetically a use#2021-09-1613:13emccueand datomic isn't widely or openly developed on so its kinda hard to get a feel for probabilities#2021-09-1613:17favilaWell I’ll put it this way, I’ve used datomic on-prem in production for over 5 years on massive 10bil datom+ databases. I think this equality makes sense in an abstract way, but I haven’t encountered any practical use for it. In practice, you’ll have execution contexts share an identical db anyway#2021-09-1613:17emccuealso if you read the clojure docs through as to "what is a value" and then see this "value" be not-a-value it would throw you for a loop#2021-09-1613:19favilaI think there’s the opposite problem too, which is they may assume too much of db inequality or equality#2021-09-1613:21favilaEg that equal dbs produce equal projections always (they may not due to non-determinism in how the projections are made) or that unequal dbs produce unequal values#2021-09-1613:24emccue> non-determinism in how the projections are made
Wait, what#2021-09-1613:26favilaEg pull expressions limits, calling impure functions in your queries, hash changes among versions, that sort of thing#2021-09-1613:31favilaEqual db guarantees that the datom sets you would get are the same, but most of what we do (query and pull) is projecting out of datoms and not part of the db per se, and the guarantees get weaker for all the usual reasons that (f x)=>y may not produce the exact same y for all time in all places#2021-09-1614:16vlaaadbtw graphql and d/as-of is exactly the right context for my question#2021-09-1614:18favilaThe as-of of a “fresh” db (the result of (d/db conn) ) is always nil.#2021-09-1614:19favilaand the basis-t of the db advances with each write#2021-09-1614:20favilaare you using an as-of db with graphql resolvers? My expectation is that a d/db is established in the resolver context at the beginning of the request and is the same through the entire request#2021-09-1614:28vlaaadI do “changelog” that returns a series of items from different as-of dbs#2021-09-1614:29vlaaadso no, in my case db is not in a context#2021-09-1614:32vlaaadI haven’t implemented it yet, but I think it might be possible for me to ensure identical as-of dbs for different items sharing the same tx…#2021-09-1615:37favilaCould you explicitly use T as a cache key? note the “effective T” of a database is (or (d/as-of-t db) (d/basis-t db)) iff since-t, isHistory, and filter are all unset and as-of-t >= basis-t#2021-09-1612:32vlaaadclient…#2021-09-1612:36vlaaadStrange there is no explanation for that, because there is even a talk by Rich — Database as a Value…#2021-09-1613:48greghttps://github.com/Datomic/mbrainz-sample/blob/master/schema.edn
At the top of the file defining schema, authors made a note about "enum attributes" and "super-enum attributes". Sounds like a best practice.
I read the definition posted there, in that file, but I can't get it.
What is the difference between these two? Could you give me some examples?#2021-09-1614:03gregIn another repo, https://github.com/Datomic/mbrainz-importer/ I've found two dataset files that refer to the same terminology:
https://github.com/Datomic/mbrainz-importer/blob/master/subsets/batches/super-enums.edn
https://github.com/Datomic/mbrainz-importer/blob/master/subsets/batches/enums.edn
Looking at these files, it looks like there are only two differences:
• super enums are just more numerous than simple enums
• a simple enum holds only a name attribute, while a super-enum holds more of them
Is that the correct distinction between these two?#2021-09-1614:41FredrikSuper enums are "global" entities referenced by several types. For instance, :artist/country and :label/country can point to the same country entity. Each regular enum is only referenced by a single type; for instance, artist/gender is only referenced by an artist.
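[Editor's note] In schema terms, that distinction might look roughly like this (attribute and ident names assumed from the mbrainz schema, not verified against it):

```clojure
;; Super-enum: one shared pool of country entities, referenced by
;; attributes on several entity types (artists and labels).
[{:db/ident :country/GB}
 {:db/ident       :artist/country
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}
 {:db/ident       :label/country
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}

 ;; Regular enum: only one type (artist) ever references it.
 {:db/ident :artist.gender/male}
 {:db/ident       :artist/gender
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}]
```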
#2021-09-1614:56mokrHi, I know I can return a collection of values from a query with [?name …] , but what do I do if I have two such variables and want the concatenated result?
Illustrated by this contrived example:
[:find [?friend-name ?sibling-name ...] ;; Just one of the many alternatives I've tried
 :in $ ?person-id
 :where
 [?person :person-id ?person-id]
 [?person :friend ?friend]
 [?person :sibling ?sibling]
 [?friend :name ?friend-name]
 [?sibling :name ?sibling-name]]
In other words I’m trying to follow two many-relationships and return the same attribute from all those entities.
Any help appreciated as I’m out of variations to try and I only seem to find documentation for the single variable variant.#2021-09-1614:59schmee:find ?friend-name ?sibling-name will return a set of [friend-name sibling-name] tuples, is that what you’re looking for?#2021-09-1615:03mokrHmm, thanks, that was simple. Maybe I’ve been experimenting with bad data here.
So, I will get a tuple of two collections that I can then concat afterwards, right?#2021-09-1615:06schmeeyou’ll get an array of tuples: [[f1 s1], [f1 s2], [f2 s2]... and so on#2021-09-1615:10mokrBut all in all it will be the names of all the people that are either a sibling, friend or both of the person identified by person-id?#2021-09-1615:18FredrikThis will return all the tuples [friend-name sibling-name]#2021-09-1615:19FredrikIt's is maybe not what you want?#2021-09-1615:24mokrIt’s always tricky to use a contrived example to try to illustrate. What I need is to follow two or more refs from an entity, where both refs are cardinality many. From the entities those links/edges leads to I need to extract an attribute.#2021-09-1615:26FredrikWhat if friend and sibling both point to the same person? Do you want a query that excludes this possibility?#2021-09-1615:26mokrNo, exclusion is not needed.#2021-09-1615:34mokrThe refs essentially represents different reasons for targeting entities.
Adding another example might make it worse, but here goes:
A PC-technician gets the task to fix some computers. The task entity has refs to computer entities. Refs can be eg. :broken-down or :user-complaint, but all the tech needs is to get :serial from all the referenced computers to know which ones to work on. In this case a broken down computer could have a user complaint, but both leads to the same serial.#2021-09-1615:57FredrikCould you use an or clause?
(d/q '[:find [?id ...]
       :in $ ?person
       :where
       (or [?person :person/sibling ?friend-or-sibling]
           [?person :person/friend ?friend-or-sibling])
       [?friend-or-sibling :person/id ?id]]
     db (d/entid db [:person/id "1"]))#2021-09-1616:00FredrikThis will find those entities which satisfy either of the clauses (or both), and returns a vector of their value for some attribute (in this case :person/id)#2021-09-1616:03mokrThanks, that looks like exactly what I was after. Reads better as well
#2021-09-1708:07hdenIs there any way to access the current revision (`:rev` from the deploy command) from an ion?#2021-09-1713:12souenzzoI asked this some time ago. The short answer is no.
You can use aws-api and dig for the rev via load balancers or cloud deploy things
or you can do like me and always make an extra commit with an ID before a deploy
#2021-09-1714:12hdengot it, thanks#2021-09-1714:15souenzzoalso, the meaning of "rev" is not so simple.
during deploy/rollbacks you have multiple instances, with different revs.
what would you expect in these cases?
the newest? relative to current machine?#2021-09-1720:55hdenin my case, the rev deployed to the current machine#2021-09-1915:45keymoneis it possible to run datomic console against dev-local database?#2021-09-1919:24favilaNo. Datomic console uses the peer api. Dev local only provides the client api
#2021-09-2008:00keymoneso the best way to explore dev-local db is to just run queries manually?#2021-09-2013:13Joe LaneUse REBL#2021-09-2008:11MaravedisHello. I have a noob problem but I'm not really having luck reading the documentation: I just want to pull all the datoms of a transaction. How do I do that?#2021-09-2009:53andrionitx-range is probably going to be better in this case#2021-09-2010:16MaravedisI found tx-range and tx-data and it was exactly what I needed 🙂 Thanks.#2021-09-2008:12MaravedisToday I'm doing:
(d/q '[:find ?pl ?v ?added
       :in $ ?tx
       :where
       [?pl :purchase-line/quantity-received ?v ?tx ?added]]
     (d/history db) transaction-id)
and it’s terribly inefficient (like 20s).#2021-09-2010:25Lennart Buitso: there is no index on t (or tx). Running this query, datomic has no choice but to use AEVT or AVET indices. Effectively walking through the index for that attr, for each point in time.#2021-09-2010:27Lennart Buitsince you bind ?tx in your in, consider using (d/as-of db transaction-id)#2021-09-2010:27Lennart Buitor otherwise, you can probably also use d/tx-range to get that specific transaction from the log#2021-09-2010:29Lennart BuitDoes that make sense?#2021-09-2014:12MaravedisIt makes sense. Thanks for your help 🙂#2021-09-2017:15gregHow to retract mistakenly added :db/ident?
E.g. I added such a transaction by mistake:
(d/transact conn {:tx-data [{:db/ident :release.type/album :release.type/name "Album"}]})
How to retract it so :release.type/album is no longer available in the latest db:
(d/q '[:find ?e :where [?e :db/ident :release.type/album]] db) ;; => ()
#2021-09-2019:44FredrikYou can use :db/retractEntity: https://docs.datomic.com/on-prem/transactions/transaction-functions.html#dbfn-retractentity
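[Editor's note] In transaction form that suggestion might look like this (a client-API sketch; the entity can be addressed by its own ident, since :db/ident is a unique attribute):

```clojure
;; Retracts every datom of the mistakenly created entity,
;; including its :db/ident and :release.type/name assertions.
(d/transact conn {:tx-data [[:db/retractEntity :release.type/album]]})
```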
#2021-09-2017:29keymonetrying to understand the difference between lookup refs and unique identity (reading https://docs.datomic.com/on-prem/schema/identity.html#lookup-refs), is there a difference between upserting
{:person/email "joe@example.com" ...}
versus
{:db/id [:person/email "joe@example.com"] ...}
?#2021-09-2018:24Lennart BuitI think the second will raise when joe@example.com is not in your db#2021-09-2018:28Lennart BuitYeah it does, so the first is a proper upsert, the second can only update#2021-09-2021:42keymonethanks!#2021-09-2105:04gregI'm experimenting with datomic dev-local, working on a schema, and trying the API. I tried to excise an attribute and I got an error:
:db.error/unsupported Excision is not supported in Datomic Cloud.
Is the excision available only in datomic on-premise?#2021-09-2111:14favilaYes
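[Editor's note] For completeness: on-prem excision is requested by transacting the :db/excise attribute; a rough peer-API sketch (eid is assumed to be the entity whose datoms should be removed):

```clojure
;; On-prem only, and irreversible: asks the transactor to excise
;; all datoms of entity eid from history.
@(d/transact conn [{:db/id (d/tempid :db.part/user)
                    :db/excise eid}])
```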
#2021-09-2110:45gregThe below query finds all the idents being attributes:
[:find ?ident
 :where
 [?e :db/ident ?ident]
 [_ :db.install/attribute ?e]]
How to transform it to "find all idents that are NOT attributes"?#2021-09-2111:11gregThe answer is:
[:find ?ident
 :where
 [?e :db/ident ?ident]
 (not [_ :db.install/attribute ?e])]
😅
#2021-09-2116:36joshkhcan a query group be deployed to a different region from the primary compute group? i suspect the answer is no, but i'm checking just in case i've missed something#2021-09-2116:43joshkh^ one use case would be to deploy an Ion lambda to handle S3 events in another region, where only lambdas in that bucket's region can be associated with a trigger#2021-09-2116:53ghadithey cannot#2021-09-2117:57Ivar RefsdalIs there some recommended java version to use with datomic transactor on-prem? Java 11?#2021-09-2210:59gregRegarding the Pull API and its last parameter: entity id.
I noticed that all the examples refer to an entity id, a number. But if something is a :db/ident or a :db.unique/identity attr, the pull can take a more verbose param:
(d/pull db '[*] [:country/code "GB"]) ;; :country/code is an attr with :db.unique/identity
(d/pull db '[*] :temp.periodicity/daily) ;; :temp.periodicity/daily is :db/ident-based enum
Q1: What does this alternative notation mean? Looks like [:country/code "GB"] and :temp.periodicity/daily are interchangeable with a number eid. Are they?
On the other hand, if I want to pull an entity where a composite tuple is :db.unique/identity, it doesn't work similarly:
(d/pull db '[*] [:temp/location+periodicity+date [[:country/code "GB"] :temp.periodicity/daily #inst"2021-09-18"]]) ;; <= doesn't work
(d/pull db '[*] [:temp/location+periodicity+date [83562883711079 74766790688854 #inst"2021-09-18"]]) ;; <= works fine
Q2: Why doesn’t it work the same with composite tuples for the elements of the tuple?#2021-09-2211:11Lennart BuitQ2 has bitten me before too. There appear to be some cases where you can’t use a db/ident in place of an entity id.
Another case is :db/cas, when old-val can be an entity id, but not an ident.#2021-09-2211:49favilaQ1: these are entity identifiers and they are all interchangeable in most contexts. D/entid (in peer api) resolves them to ids. In transaction data they will be resolved at the transactor rather than the peer, which makes them useful for preventing update and delete races#2021-09-2211:54favilaQ2: but there are exceptions, like inside tuple values. Another is in queries where the attribute is not statically known. The reason is that you need type information to know whether the slot in the tuple is a ref or not. (I’m sure datomic could be extended to figure it out, but for whatever reason it hasn’t been)#2021-09-2211:56favilahttps://docs.datomic.com/on-prem/schema/identity.html#entity-identifiers#2021-09-2223:27gregThank you @U09R86PA4 @UDF11HLKC#2021-09-2321:23gregGoing back to this example:
(d/pull db '[*] [:temp/location+periodicity+date [[:country/code "GB"] :temp.periodicity/daily #inst"2021-09-18"]]) ;; <= doesn't work
(d/pull db '[*] [:temp/location+periodicity+date [83562883711079 74766790688854 #inst"2021-09-18"]]) ;; <= works fine
Since entity identifiers are not interchangeable in the context of a tuple, am I correct that this ⬇️ is the shortest way to pull such an entity's details (shortest without using rules):
(d/q '[:find (pull ?e [*])
:where
[?ecountry :country/code "GB"]
[?eperiodicity :db/ident :temp.periodicity/daily]
[(tuple ?ecountry ?eperiodicity #inst"2021-09-16") ?tup]
[?e :temp/location+periodicity+date ?tup]] (get-db))
I mean, the pull is just an example. But ultimately I need to find ?e, and for such a tuple I need to write 4 lines (4 facts, 4 data patterns - tbh I'm not sure how to name these :where vectors).
Since it is not that nice, people probably use rules, right?
(def rules
'[[(find-temp ?e ?country ?periodicityident ?date)
[?ecountry :country/code ?country]
[?eperiodicity :db/ident ?periodicityident]
[(tuple ?ecountry ?eperiodicity ?date) ?tup]
[?e :temp/location+periodicity+date ?tup]]])
(d/q '[:find (pull ?e [*])
:in $ %
:where
(find-temp ?e "GB" :temp.periodicity/daily #inst"2021-09-16")]
(get-db) rules)
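As an aside, favila's d/entid remark elsewhere in this thread suggests a shorter alternative — a sketch only, untested, assuming the peer API and a bound db value: resolve the ref components of the tuple to plain eids first, then use the all-eid form of the lookup ref, which is the form shown to work above.

```clojure
;; Sketch (peer API, untested): resolve each ref component of the composite
;; tuple to its eid with d/entid, then build the lookup ref from plain eids.
(let [country-eid     (d/entid db [:country/code "GB"])
      periodicity-eid (d/entid db :temp.periodicity/daily)]
  (d/pull db '[*]
          [:temp/location+periodicity+date
           [country-eid periodicity-eid #inst "2021-09-16"]]))
```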
Am I correct or am I still missing here some bits (in terms of accessing entities identified by composite tuples)?#2021-09-2211:49Ivar RefsdalI've recently seen
2021-09-22 06:49:19.112 INFO datomic.update - {:event :transactor/admin-command, :cmd :request-index, :arg "xxx-7-24fef96f-2e3f-4369-acb3-4ae67b5f91df", :result {:queued "xxx-7-24fef96f-2e3f-4369-acb3-4ae67b5f91df"}, :pid 13582, :tid 772}
2021-09-22 06:49:19.263 INFO datomic.update - {:index/requested-up-to-t 17072440, :pid 13582, :tid 129}
2021-09-22 06:49:19.655 ERROR datomic.process - {:message "Terminating process - Timed out waiting for log write", :pid 13582, :tid 776}
in our transactor logs. Does that mean that data-dir is not writable?
Or is it something else?#2021-09-2214:22jaretWhat version of Datomic are you using? What underlying storage? Is this a new system or an existing system? If you'd like you can shoot me a support case by e-mail and we can share logs/config there and I can help.
This message can be thrown for a variety of reasons: unavailability of storage, transactor failover, gc pauses etc.#2021-09-2219:04Ivar RefsdalThanks Jaret. This is version 1.0.6344. PostgreSQL is the underlying storage.
It's an "old" system that's been unstable for some time.
I got that error after issuing datomic.api/request-index. A repeated request-index also restarted the transactor.
After changing the data-dir to an absolute path and some minor Dockerfile changes, I did not get an error on request-index - it worked fine.
If this is the current behavior of Datomic, i.e. that an incorrect data-dir may restart the transactor at some future time, I think it should be fixed, or it should be detected on start-up that the data-dir is not writable.
I will file an issue if the transactor stops working again. Thanks!#2021-09-2219:09jaretIf you can't write to the data-dir the transactor will failover. Any "restart" is going to be whatever you've implemented to facilitate high availability.#2021-09-2307:38Ivar RefsdalWhat does "failover" concretely mean? We have a single container instance running, no HA configured. So far things look good today.#2021-09-2314:43jaretSorry, if you are only running a single transactor pointed to the storage system then the transactor will just fail. Failover occurs when you have two transactors (active and standby) monitoring storage heartbeat. You should see reported :heartbeat failure -- unable to write heartbeat to storage. I will add we do recommend https://docs.datomic.com/on-prem/operation/deployment.html given that Datomic is a distributed system and given proper process isolation you're going to encounter transactor failure at some point (i.e. network latency or GC pauses etc), and https://docs.datomic.com/on-prem/operation/ha.html is the way to provide resiliency.
#2021-09-2418:29Ivar RefsdalThanks Jaret. I've encountered another issue/error now:
2021-09-24 16:31:03.076 WARN datomic.update - {:message "Index creation failed", :db-id "pvo-backend-service-stage-2-4220cac7-8b82-4f5e-af48-fc52303bb641", :pid 25625, :tid 93}
java.lang.Error: Timed out waiting to segment log.
at datomic.update$process_request_index$fn__23960$fn__23961.invoke(update.clj:183)
at datomic.update$process_request_index$fn__23960.invoke(update.clj:181)
at clojure.core$binding_conveyor_fn$fn__5772.invoke(core.clj:2034)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
not sure what/why/how this would happen?
Redeploying the container (to a different node I suppose, picked by azure) solved the problem.#2021-09-2211:56manutter51I don't know what that means, but something makes me wonder if you've run out of disk space somewhere?#2021-09-2214:03Tatiana KondratevichHi, does somebody know how to solve this problem:
{:message "Data did not conform",
:class ExceptionInfo,
:data
#:clojure.spec.alpha{:problems
({:path [:local :home],
:pred clojure.core/string?,
:val nil,
:via
[:cognitect.s3-libs.specs/local
:cognitect.s3-libs.specs/home],
:in [:local :home]}),
:spec
#object[clojure.spec.alpha$map_spec_impl$reify__1998 0x5dfc2a4 "
I can't figure out where it says the error is.#2021-09-2214:05Alex Miller (Clojure team)you have a nested map {:local {:home ....}} - the predicate there is string? but it's getting nil (which is not valid)#2021-09-2214:06Alex Miller (Clojure team)if nil should be allowed, then that spec should be (s/nilable string?) instead#2021-09-2214:06Alex Miller (Clojure team)none of this seems related to datomic afaict#2021-09-2214:07Tatiana Kondratevichplease tell me where exactly I can check this map?#2021-09-2214:45Alex Miller (Clojure team)I don't know - what did you do to see the error?#2021-09-2215:44Tatiana Kondratevich@U064X3EF3 I use clojure -A:ion-dev '{:op :push :region "eu-central-1" }'#2021-09-2215:55Alex Miller (Clojure team)I don't have enough knowledge about what Datomic is doing there to answer that so will need to wait for someone from the Datomic team to look at it#2021-09-2214:28ennHello … with Datomic Analytics, is there any way to either:
• see the database as of a particular t
• when I run a query, get the t associated with the results that I’m seeing
?#2021-09-2223:43gregWhat do you use for migrations?
In a typical scenario, prod is not the only deployment target. We might have separate dev, test, and pre-prod deployment targets, each of them holding a separate Datomic db.
The goal? Keep the db schema as code, and apply it in each deployment target within a CI/CD pipeline.
In an SQL world I would integrate something like https://github.com/flyway/flyway. Here, there is not much on this topic if you google it. I've found a couple of libraries like https://github.com/luchiniatwork/migrana and https://github.com/avescodes/conformity but, like the majority of clj libs, they don't look very active (I know, they might just work and nothing more is left to do).
Do you use any of these, or maybe something else?#2021-09-2307:39Ivar RefsdalWe've used conformity for a number of years without any issues
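For context, a minimal conformity usage sketch — this assumes the io.rkn/conformity dependency and an open Datomic peer connection `conn`; the norm and attribute names here are made up, not from the discussion.

```clojure
;; Sketch (untested, requires Datomic + io.rkn/conformity on the classpath).
(require '[io.rkn.conformity :as c])

(def norms
  ;; each named norm is transacted at most once; conformity records in the
  ;; db which norms have already been applied
  {:myapp/schema-v1
   {:txes [[{:db/ident       :user/email
             :db/valueType   :db.type/string
             :db/cardinality :db.cardinality/one
             :db/unique      :db.unique/identity}]]}})

;; safe to run on every deploy -- already-conformed norms are skipped
(c/ensure-conforms conn norms)
```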
#2021-09-2307:40Ivar RefsdalNot sure how that fits with db schema as code#2021-09-2308:47Ivar RefsdalIf you want somewhat shorter schema syntax that fits in an edn file, you may look at https://github.com/ivarref/datomic-schema (written by myself, based off cognitect-labs/vase)
#2021-09-2310:42Jakub Holý (HolyJak)For SQL and Mongo we use Ragtime, and there is also ragtime.datomic
#2021-09-2223:46ghadiI think you've already made the correct call to do any transactions such as schema installation during the CI/CD pipeline#2021-09-2223:46ghadiLots of people do it during instance startup and that's just got so many problems
#2021-09-2314:29Daniel JompheVery interesting. Wouldn't it be great to mention this on Datomic's doc site, under e.g. Operation?#2021-09-2223:47ghadiEsp when you have more than one instance#2021-09-2223:48ghadiBasically it's more important when than how
#2021-09-2311:36daniel.spanielWill Datomic / Can Datomic
-> ever allow us to retract / change <- attribute definitions that are not used yet?
I mess up quite often, defining an attribute as integer instead of double, and then once it's transacted, Datomic does not allow changing the attribute type or even retracting it completely.
Seems like if the attribute is not even used yet, we could make that change?#2021-09-2314:33jaretYou can never alter _:db/valueType_, _:db/tupleAttrs_, _:db/tupleTypes_, or _:db/tupleType_. But you can change the schema of an attribute. Is there a specific example you have? I ask because you can change the :db/ident of an attribute and in that way you can technically re-purpose or re-use the ident you are attached to. We discuss this in https://docs.datomic.com/cloud/schema/schema-change.html:#2021-09-2314:34jaret> We don't recommend re-purposing an old `:db/ident`, but if you find you need to re-use the name for a different purpose, you can define the name again as described in attribute-definition. This re-purposed `:db/ident` will cease to point to the entity it was previously pointing to and ident will return the newly installed entity instead.#2021-09-2314:34jaretIf you have never used the attribute this kind of re-purposing through re-using the :db/ident seems like what you would want to do.#2021-09-2314:34jaret@dansudol are you using Cloud or on-prem?#2021-09-2314:44daniel.spaniel@jaret we are using Cloud.
I don't want to repurpose the :db/ident.
I want to alter the :db/valueType flat out.
But only when it's unused anywhere.
That seems reasonable, right? No datoms using the db/ident = ok to change that valueType?#2021-09-2314:45daniel.spanielI know this is an illegal procedure ( believe me I know ) and it hurts me a lot because if I make one mistake on schema .. like this .. there is no do over#2021-09-2315:10jaret@dansudol I feel for you on this. Schema design is hard in any DB and I strongly recommend to anyone that you work out new schema in a staging environment or a test DB before moving schema to production. Even better you can experiment at https://docs.datomic.com/cloud/dev-local.html. Use import-cloud to get your DB locally. If you haven't used the attribute in question, here is exactly what you can do to save the name and add a new schema attribute with the valuetype you want:
;; start with your attribute :foo/bar
{:db/ident :foo/bar
 :db/valueType :db.type/long
 :db/cardinality :db.cardinality/one}
;; archive :foo/bar because you realize later that you do not want this attribute to be a long
[{:db/id :foo/bar
  :db/ident :foo/bar-no-longer-used}]
;; transact your new schema, reusing your favorite name :foo/bar with the new value type double
{:db/ident :foo/bar
 :db/valueType :db.type/double
 :db/cardinality :db.cardinality/one}#2021-09-2913:38daniel.spanielAs you probably already know @jaret this worked .. I am a little shocked, but I will take this win .. thanks again!#2021-09-2913:40jaret@dansudol It's important to understand that you aren't getting rid of your initial schema mistake with this approach. The original schema is still there. So you can always see the history of this move.#2021-09-2913:40jaretSo I definitely recommend again the whole testing schema in non-production etc#2021-09-2913:42daniel.spanielfor sure for sure .. we test on staging always ..#2021-09-2315:53daniel.spanielGee .. that looks pretty magical.. I will try it. And we use dev-local. And we use staging environment too. I just sometimes push things to staging that have bad schema. Sure, I could blow away staging, but sometimes it's nice to not blow it away.
This trick I have not tried quite like this but thanks for the tip!#2021-09-2320:28Daniel JompheDatomic Cloud backup & restore with an external lib
With a small DB seed of a few thousand datoms, we were able to use @tony.kay's https://github.com/fulcrologic/datomic-cloud-backup lib to backup and restore our DB in different scenarios. 🎉
#2021-09-2320:33Daniel JompheSolution
We spent more time operationalizing these ops than playing with Tony's lib.
We're now able to remotely connect to our DB in each environment, take or incrementally update a backup on our local computer's hard drive, and restore to a new DB in any environment by replaying the backed up transactions there.
We're also able to dynamically change an environment's connection from its original DB to its restored DB, without going through a full Ion deployment cycle.
Thus we can restore to a new DB in some system, and when the DB's ready, have the instance(s) switch their connection to the new DB and seamlessly continue.#2021-09-2320:37Daniel JompheAdvisability
With that said, I'm not sure it's the best way to proceed.
For example, if we switch DBs dynamically, we need to remember to git-commit our config changes so that the next Ion deployments won't switch us back to the previous DB.
Also, the CloudFormation template has a parameter to pre-heat a chosen DB, and we should remember to update this parameter after having switched.#2021-09-2320:38Daniel JomphePerformance
Measures of a restore of a few thousand datoms:
• ~3 seconds using dev-local.
• ~3 minutes using Client, remotely connected.
• ~? minutes using Client, Ion-locally connected: I'm yet to try this 1️⃣.
It surprised me that it took ~3 minutes to replay this small number of transactions and datoms. We're soon to be in production with our first client and we wonder how long it would take to restore bigger, more real amounts of txes and datoms.
Scalability
Of course, thanks to Tony's good work, we could try:
• 1️⃣ storing our backups in S3 and setting up a streaming restore server to continuously-incrementally prepare a restored & ready, unused DB. But that would imply many more efforts on our part.#2021-09-2320:50Daniel Jomphecc @U072WS7PE re: our (now archived) discussion about backups & restores for Cloud.#2021-09-2611:02Tobias SjögrenNewbie question: What “kind” of content can the Datom value have?
Beyond a simple string like “pizza”, can the value be an entity id?
In the product docs there’s also “:green” as the value, what is that? Sorry, I’m not even sure what the colon means…#2021-09-2612:29Joe LaneCheck out the schema page.
This links specifically to the valueType section. https://docs.datomic.com/cloud/schema/schema-reference.html#db-valuetype#2021-09-2612:38Tobias SjögrenOh, I don’t know why I couldn’t connect value and valueType.. - thanks!#2021-09-2618:12Tobias SjögrenThinking about the idea of immutable values makes me think that they are in fact identical to entities. If not, what is the fundamental difference?#2021-09-2618:29potetmEntities change over time. They can have attributes added or removed or changed.
Values are always just the value that they are.#2021-09-2618:30potetmIn datomic, an entity has a value at a particular time.#2021-09-2618:31potetmsee: https://www.dotkam.com/wp-content/uploads/2013/04/epochal-time-model.png#2021-09-2619:01Tobias SjögrenIf and when a value needs a have a value attached to it (“green” needs to be either “light” or “dark” e.g.), it should become an entity - so if you are unsure of whether it is a pure value or if it is to become an entity, you might just as well make it an entity right away - would you agree to that?#2021-09-2619:03potetmI would say it’s probably contextual.#2021-09-2619:05potetmLike, if you have a process where a light has a base color and a hue with different meanings, then yes.
For example, if you had a light where light green meant “good,” dark green meant, “good, but borderline,” and red meant, “bad.”#2021-09-2619:05potetmBut even in that case, you could easily say, “Those are just different states. They should be distinct: green dark-green red”#2021-09-2619:07potetmMore to your question, rather than your example, I think you’re on the right track. Any time you need multiple attributes to constitute a value, you should consider using an entity.#2021-09-2619:08potetmYou can also consider a tuple: https://blog.datomic.com/2019/06/tuples-and-database-predicates.html#2021-09-2619:16Tobias SjögrenThanks. I need to gain a better understanding of what a value actually is in comparison to an entity, and specifically what it means for a value to be immutable the way Hickey talks about it. Coming from a relational database system I’m used to changing this field value to that new value. What it would mean to go immutable is, I guess, to have the values stable and just change the pointer from one value to another value. It might do good for my understanding of immutable values to have the relational model as the starting-point to convert from, so to speak..#2021-09-2619:36potetmEntities are values over time.
#2021-09-2619:36Tobias SjögrenWhat would be the benefit of defining “green” as a value compared to it as an entity?
The obvious reason, as far as I understand, of defining “green” as an entity is that it makes it non-redundant - if green is a value, then the “green” instances (connected to different entities) are copies of each other without any connection..#2021-09-2619:39potetm> it makes it non-redundant
This isn’t really true. Numbers, Keywords, Strings — normal values in clojure — are interned. So they share an instance.#2021-09-2619:39potetmSo if you’re worried about memory implications of using values, you shouldn’t be.#2021-09-2619:41potetmThe real question is: Does this thing change over time? If it changes, do I want all references to change as well?#2021-09-2619:43potetmI mean, it’s not really much different from using an id column vs a value column in SQL.#2021-09-2619:43potetmWhen do you use an id column?#2021-09-2619:47Tobias SjögrenNot thinking of memory implications..#2021-09-2619:47Tobias Sjögren“Id column”, you mean a foreign key value pointing to another table where the “value” is located?#2021-09-2619:49Tobias SjögrenThat could be the first step in adapting a relational database to the Datomic way, gradually converting..#2021-09-2619:50Tobias Sjögren“Interned”, that’s a term that I’m not familiar with..#2021-09-2620:05potetmIntern is a fancy word for caching. (It implies no eviction mechanism and automatic, language-level resolution to the cache value.)
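potetm's interning point can be seen directly at a Clojure REPL — a small sketch, nothing Datomic-specific:

```clojure
;; Keyword literals are interned: every :green in a program resolves to the
;; same object in memory, so repeated keyword values are shared, not copied.
(identical? :green :green)             ;; => true

;; Strings constructed at runtime are distinct objects, yet still value-equal.
(= "green" (String. "green"))          ;; => true
(identical? "green" (String. "green")) ;; => false
```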
#2021-09-2620:05potetmhttps://en.wikipedia.org/wiki/String_interning#2021-09-2620:06potetmBut yes, I meant a foreign key column vs a value column.#2021-09-2620:07potetmValue column:
name | address
"me" | 123 Foo Ave.
Foreign Key Column:
name | address
"me" | 948
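In Datomic schema terms, the same tradeoff shows up as the choice of an attribute's :db/valueType — a hypothetical sketch (the attribute names are made up, not from the discussion):

```clojure
;; "value column": the address lives directly on the person as a string value
{:db/ident       :person/address
 :db/valueType   :db.type/string
 :db/cardinality :db.cardinality/one}

;; "foreign key column": the person points at a separate address entity,
;; so everything referencing that entity sees its current attribute values
{:db/ident       :person/address-ref
 :db/valueType   :db.type/ref
 :db/cardinality :db.cardinality/one}
```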
#2021-09-2620:07potetmThe tradeoffs for one vs the other are roughly the same in datomic and SQL.#2021-09-2620:11Tobias SjögrenMeaning the first with the value column is equally redundant in both Datomic and SQL?#2021-09-2620:20Tobias SjögrenWhat I tend to think is that using a value column correlates to mutable values and the foreign key column to immutable ones..#2021-09-2620:28potetmWhat’s immutable about foreign key columns?#2021-09-2620:29potetmIs the string 123 Foo Ave mutable in SQL?#2021-09-2620:30potetmBoth value and foreign key columns can have changing values over time.#2021-09-2620:33Tobias SjögrenGiven that the value in the other table (that the foreign key is connected to) is stable/immutable, then the foreign key column option of yours above represents the immutability.#2021-09-2620:34Tobias SjögrenThe foreign key value can change but each value in the other table does not.#2021-09-2620:35Tobias SjögrenThat at least how I have been thinking about it while in the process of trying to learn more about things..#2021-09-2621:00potetm1. In SQL, the values in the foreign table can change.
2. The value in the local table can change too! e.g. In the above example, you can change 948 to 576 when the address changes.
There’s nothing totally immutable about any of this stuff!#2021-09-2621:02potetmBut when you execute a PostgreSQL query, you get a “snapshot” of the database—a point-in-time, stable, immutable value.#2021-09-2621:02potetmAll of these ideas apply equally to both datomic and SQL.#2021-09-2621:02Tobias SjögrenCertainly any value can change - I just used it as a model to better understand “immutability”.#2021-09-2621:03potetmSQL actually does a really good job of exemplifying many of the ideas behind Datomic.#2021-09-2621:03potetmThe major innovations of datomic really build on top of SQL, they do not replace them.#2021-09-2621:04potetmSo I would suggest solidifying some of these ideas just thinking about SQL.#2021-09-2621:10potetmJust to make it easier for you, some of the major innovations that SQL doesn’t have are:
1. History by default
2. Universally serialized transactions
3. Automatic indexing of every entity/attribute/value
4. Available transaction log with full history
5. A data-first query language
6. Automatic caching#2021-09-2621:10potetmBut, again, the things you’re talking about are definitely things you can reason about just with SQL.#2021-09-2619:41Joe Lane@tobejazz you might enjoy these talks.
https://youtu.be/-6BsiVyC1kM
https://youtu.be/EKdV1IgAaFc
https://youtu.be/9TYfcyvSpEQ
#2021-09-2619:45Tobias SjögrenI am in the process of watching them!#2021-09-2619:52Tobias SjögrenSo far, my feeling is that the term “value” lacks a proper definition..#2021-09-2619:57Joe Lanehttps://docs.datomic.com/cloud/glossary.html#value#2021-09-2619:59Joe LaneFwiw, Datomic is a relational database.#2021-09-2620:02Tobias SjögrenCare to elaborate?#2021-09-2620:10Joe Lanehttps://docs.datomic.com/cloud/whatis/data-model.html#universal
All datoms are in the same relation in datomic#2021-09-2620:14Tobias SjögrenDatoms being located in a single “relation” making Datomic a relational database?#2021-09-2620:17Tobias SjögrenBack to the value thing. A value being defined as “Something that does not change” seems to be a Hickey definition - but I might be wrong about that..#2021-09-2620:19Joe LaneWell you can safely assume that meaning when you are working with Datomic or in Clojure. #2021-09-2620:22Tobias SjögrenTo the degree it is a Clojure/Datomic specific definition, it would need further explanation - at least for me..#2021-09-2620:31Joe LaneLet me know if the three videos don't answer your questions.
Bonus video https://youtu.be/ScEPu1cs4l0
#2021-09-2620:40Linus EricssonIn the datomic data model you have datoms, which are of the form [E A V Tx (added?)] where E is an entity id. It is a long (but could be thought of as a pointer into a datastructure containing all the database data). A is also a long and points to a certain entity that describes the attribute. The type of V is always determined by the valueType of the attribute entity pointed out in A. The Tx (transaction) is also a long, and points to a transaction entity. Added? is a boolean describing if the datom was added or retracted. In the ordinary database view it is always true (since retracted datoms are not visible anymore).
An entity in datomic is all datoms that share the same E.
A reference (that's the relational part) must have an A pointing to a schema entity whose valueType is :db.type/ref and a V pointing to an E.
One confusing thing is that :db.type/ref also points out an entity which has an attribute :db/ident.
The documentation of datomic differs from most other software because it is very terse. You have to read it carefully.
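A concrete illustration of the [E A V Tx added?] shape described above — the numeric ids here are made up, and :country/code is borrowed from the pull examples earlier in this log:

```clojure
;; one datom: entity 17592186045418 has attribute 123 (a made-up attribute
;; eid whose :db/ident might be :country/code) with value "GB",
;; asserted (true) in transaction 13194139534312
[17592186045418 123 "GB" 13194139534312 true]

;; the same fact written as transaction data, using the attribute's ident
[:db/add 17592186045418 :country/code "GB"]
```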
#2021-09-2620:44Tobias SjögrenConcerning the documentation, the fact that it is split up into On-Prem and Cloud and redundant over the two is not optimal..#2021-09-2620:45Linus EricssonYeah, that is confusing sometimes. Are you using datomic On-Prem or Datomic Cloud? There are difference between these products so you should look into the right manual.#2021-09-2620:46Linus EricssonThe various datatypes and literal forms in clojure are described here: https://clojure.org/reference/reader#_reader_forms
(it's also very terse. An introduction could be https://www.braveclojure.com/)#2021-09-2620:47Tobias SjögrenI got my eyes on Datomic about a week ago, and I’m all in to learn about it, to gradually move away from the relational database that I’m using at the moment - at least incorporating new ideas to begin with.. So no, I’m not a user of any of them yet..#2021-09-2620:47Joe LaneHave you worked with Clojure before @tobejazz ? #2021-09-2620:50Tobias SjögrenI have not.#2021-09-2620:50Tobias SjögrenMy tool at the moment is FileMaker.#2021-09-2620:56Joe LaneThe philosophies of Clojure and Datomic are fairly intertwined. Learning the basics of Clojure may greatly aid you in understanding Datomic.
The last talk I linked to you (are we there yet) is where the “Epochal Time Model” is introduced (if I remember correctly). It harkens back to the philosopher Alfred North Whitehead and his ideas found in his writings on “Process and Reality”. #2021-09-2621:00Tobias SjögrenOh, so Clojure and Datomic are complected 😉
I will certainly check the videos out..
#2021-09-2707:05Jakub Holý (HolyJak)Reportedly, Datomic can also be used quite well from Kotlin. (I know it has a Java API, but having an API does not say much about the experience of using it.) (I think I have it from https://www.youtube.com/watch?v=hicQvxdKvnc)#2021-09-2711:20Tobias SjögrenI’m used to understand “value” as the content of a field in a database table, or a variable’s value - that is, something that can change. Now I’m told that a value is “something that does not change”. Although I’m fully into supporting the notion of immutability - I’m not sure how to comprehend this re-definition of the term “value”. So far, the Hickey videos don’t really explain this.. P.S. If I buy into the new meaning, what should I call a field content?#2021-09-2711:38Lennart BuitI tend to think about values as elements of a domain. Say, 4 is an element of the domain of integers, or “aaa” is an element of the domain of strings.#2021-09-2711:40Lennart BuitSome people tend to call a mutable variable a ‘place’#2021-09-2711:41tvaughanI don't think you need a new term. I think adding the concept of time would be helpful though. At a certain point in time an attribute has a value, and at another point in time the attribute could have a different value, but the value at these two points in time cannot change.#2021-09-2711:56FredrikThe field content is a pointer to a value. The shift in thinking is to understand that while the field content can point to different values at different points in time, the value it points to (at any given time) never changes. This is what it means for values to be immutable.#2021-09-2711:57Leaf Garland> I’m used to understand “value” as the content of a field in a database table, or a variable’s value
I think you said it yourself quite well. The thing that changes is the field in a database table or a variable, not the values that are their content.#2021-09-2712:08FredrikAn example. Let's say you ask a database, or an object if you're doing OOP, for the value of a field F1 and it returns 2. Now you tell it to update the field to 3. What happened? Did the database somehow change the number 2 into a 3? Will every place in your code that has a "2" now get a "3" instead? Of course not. Numbers are immutable, they're immutable values. But let's say you ask for another field F2 and it returns a string "abc". You tell it to append "d" to the end. What happened now? Did the value "abc" change into "abcd"? In some languages, that is exactly what happens. The value "abc" no longer exists, it got blown away by appending "d". The bad news is that any part of your program that referenced the old value "abc" now sees "abcd" instead. This is the cause of a lot of headaches, which immutable data structures simply don't allow to happen.#2021-09-2712:14FredrikWe could say that at time T1, the field F2 referenced the value "abc", while at time T2, it referenced the value "abcd". The value "abc" never changed.#2021-09-2712:14Tobias SjögrenWhat you are saying is that numbers are immutable and strings mutable, right? This, to me, adds another layer to the whole thing.#2021-09-2712:18FredrikNo, I didn't say that, or make it clear enough. I said that in some languages, which include Ruby and PHP, strings are mutable. This is bad, which is why Clojure, Java, Python and others made them immutable.#2021-09-2712:22FredrikNow Clojure takes everything a step further and makes any kind of value immutable, not just the basic data types like integers and strings.#2021-09-2712:25Tobias SjögrenWhich to me suggests that values are not automatically immutable; they are made or treated so if one makes that decision. 
So to say that “We should use values!” doesn’t automatically imply that the values are immutable. I keep coming back to the definition of a value - is it inherently immutable, which is how I understand Hickey, or can values be made immutable?#2021-09-2712:26Tobias SjögrenI maybe should point out here that to me this is not playing around with words - I feel it is at the core of trying to gain a better understanding of the whole thing..#2021-09-2712:26FredrikYes, having immutable values in Clojure was a design decision, maybe its most important one. Their immutability comes from the way they are implemented.#2021-09-2712:29Tobias SjögrenIf Hickey were to say that (for example in the “Value of Values” video), it would certainly make more sense to me..#2021-09-2712:31FredrikI think of immutable values this way: If X is some immutable value, then I can observe X at any point in time and always see the same thing. Anyone else can also observe X and always see the same thing. If I give you a reference to X, I don't have to copy the value of X before doing so, in fear of you "changing X" in some way. In OOP, we must fear the latter all the time because if I give you X and you do X.set(field, value), I can't rely on X being what I think it is.#2021-09-2712:32Tobias SjögrenWhich is the same thing that happens in a relational database, at least by default..#2021-09-2712:33Tobias Sjögren(when a field’s content is changed from one value to another)#2021-09-2712:34FredrikYes, exactly. In Datomic, you always run a query against a specific value of the database, giving the nice property that running the same query against the same database value always gives the same result.#2021-09-2712:39Tobias SjögrenNow, a “value” becomes somewhat hard to differentiate from an entity I think. They are both stable “items”.
One idea is that a value is an entity without attributes, which would mean that as soon as a value should have an attribute attached to it, it should become an entity, and vice versa (though that maybe never happens): an entity that has no attributes would become a value. (trying ideas here..)#2021-09-2712:40FredrikI don't think that's the right picture. An entity E is a collection of datoms, which are records of the fact that at a certain point in time, an attribute A had a value V.#2021-09-2712:41FredrikNow it might happen that the value is a pointer to another entity. But that value (the pointer itself) is still immutable.#2021-09-2712:45Tobias SjögrenThen, could it be that the only difference between values and entities is that entities have attribute-value pairs attached to them?#2021-09-2712:46Tobias SjögrenFor example, is “green” a value or an entity? It depends, right? If you want to attach the values “light” or “dark” to it (green), then green should be an entity instead of a value.#2021-09-2712:49FredrikMaybe someone can answer this better at a more fundamental level, but I'll take a shot.
An entity is a collection of attribute-value pairs. A value is something measurable, a numerical quantity, a string etc., more precisely those described by :db/valueType in Datomic.#2021-09-2712:50Fredrik"green" (the literal string) is a value#2021-09-2712:51Fredrik"light green" (the concept, not the string) can definitely be an entity. It can be an entity whose color attribute is "green", and whose shade attribute is "light" (I'm making these attribute names up)#2021-09-2712:53FredrikOr, the color attribute can be a ref to another entity, let's say C1. C1 could then have the attributes (I'll present it as a map attribute-name -> value)
{:name "green"
:rgb 0x00ff00}#2021-09-2712:57Tobias SjögrenOK, “green”, the literal string is a value and could be the name for an entity, right?#2021-09-2712:58FredrikYes! It could be the value of a name attribute.#2021-09-2713:01Tobias SjögrenMy initial impulse is to have the V position be an entity id all the time.#2021-09-2713:03Tobias SjögrenWhich e.g. points to the entity with the name “green”.#2021-09-2713:03FredrikUnless you need to know more facts about "the color green" (as an entity), and thus have it be an actual entity, there's no need to. Use literal values when you can.#2021-09-2713:04anders@U02G1DKNWKT at some point they must bottom out to an actual value. If you care about modeling the life cycle of green (e.g. it can "change", or more precisely have a different set of attribute/values at certain points in time) feel free to do so#2021-09-2713:05FredrikYes, it must all bottom out in values eventually (unless you are doing something like only modelling the relationship between entities without knowing anything else about them)#2021-09-2713:05Tobias SjögrenYou don’t see an obvious disadvantage of doing so here?#2021-09-2713:05andersas previously said: entities are sets of attributes that evolve over time. Whether you model "green" as an entity or a value depends on what "green" actually is in your domain#2021-09-2713:07Tobias SjögrenThank you for the discussion - I’m slowly moving towards understanding (I hope)..#2021-09-2713:08FredrikGood luck, and keep asking if you have more questions. My encouragement is to use literal values as much as possible.#2021-09-2713:10Tobias SjögrenThat makes me curious as to why you prefer literal values..#2021-09-2713:12andersthat is kinda like saying 'why do you prefer columns over tables' in sql#2021-09-2713:12FredrikBecause they are inherently simpler
#2021-09-2713:13Tobias SjögrenYes it is @U0ESP0TS8#2021-09-2713:13andersit depends on what your modeling requirements are#2021-09-2713:14vlaaadWasn't the "Value of Values" talk about these concepts?#2021-09-2713:14Tobias SjögrenKind of..#2021-09-2713:14vlaaadValues, references, identities..#2021-09-2713:15FredrikI think we've been trying to unpack a small part of it.#2021-09-2713:22Tobias SjögrenConcerning how to model “green” as a value or as an entity - I’m thinking that unless I make it an entity (the name of an entity), I will have the redundancy of many instances with the value of “green” all over the database instead of every one pointing to one single centralized instance (the entity). I’m not sure if this applies to Datomic though..#2021-09-2713:24anders@U02G1DKNWKT it certainly does apply to Datomic as well#2021-09-2713:25Tobias SjögrenOK, so making it a question of wanting redundancy or not is valid then..#2021-09-2713:26FredrikWould you worry about the same if you have an orderQuantity field, with the numbers 1 and 2 recurring very often?#2021-09-2713:26Tobias SjögrenI actually am not quite sure about that - possibly..#2021-09-2713:27Tobias SjögrenThat might be extreme..#2021-09-2713:27andersThe flip side is: sometimes you want the "redundancy", as you want the separate entities to hold different values at different times#2021-09-2713:28Tobias SjögrenJust create a new entity for it?#2021-09-2713:28andersIf you're coming from a SQL background this dilemma doesn't change significantly with regards to datomic#2021-09-2713:28andersThis is a modeling exercise#2021-09-2713:28Tobias SjögrenGood to know!#2021-09-2713:28Tobias SjögrenYes.#2021-09-2713:28FredrikA big benefit of having immutable data structures is that it is always safe for many objects to reference the same value.
Since strings (like other values) are immutable, the JVM can optimize for this by only storing one copy of each string.#2021-09-2713:28Tobias Sjögren(actually FileMaker)#2021-09-2713:29Tobias SjögrenThe memory thing is not an issue to me at this point..#2021-09-2713:30andersIn datomic, entities "change" over time, meaning they can hold a different set of attribute/values over time.#2021-09-2713:30FredrikMy mistake, I'm not understanding what you mean by redundancy then?#2021-09-2713:30andersWith datomic, you can get a hold of the database at a given point in time.#2021-09-2713:30andersby doing so, you hold the database at that given point in time as a value#2021-09-2713:31andersAs this value will never change. What follows is that you can also consider a given entity of that database value as a value#2021-09-2713:31Tobias SjögrenBy redundancy I mean the copying of the hard coded value like “green” in many instances without them being automatically connected..#2021-09-2713:32FredrikAre you worried about equality semantics?#2021-09-2713:32andersThis is possible as Datomic accretes new facts, but does not forget old facts#2021-09-2713:32FredrikAs in, "do these have the same color"?#2021-09-2713:33Tobias SjögrenPossibly. If “green” is regarded as one single “thing”, it should be represented as one single thing (entity) in the database as well.#2021-09-2713:34FredrikYou are in some sense asking "what is a color?" You will have to design this based on the needs of your app or domain.
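The "safe to share" point above can be shown with a minimal Clojure sketch (an editorial illustration, not from the conversation): deriving a new map from an old one never disturbs the old one, so both can be referenced freely without copying:

```clojure
;; assoc returns a new map and leaves the original alone, so two
;; "objects" can safely reference the same value without copying it.
(def green {:color "green"})
(def light-green (assoc green :shade "light"))

green        ;; still {:color "green"}
light-green  ;; {:color "green", :shade "light"}
```

Under the hood the two maps even share structure, which is why this is cheap as well as safe.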
#2021-09-2713:39FredrikFor a drawing app, for instance, it would definitely make sense to give colors more consideration, and maybe model them as entities. User A has a :user/favorite-color referencing entity E1, where E1 has attributes
{:color/name "foo"
:color/hue ...
:color/saturation ...
:color/lightness ...}#2021-09-2713:41Tobias SjögrenNew example. A date. Is a date a value or an entity? For instance, in FileMaker (a relational database), if I want to get an answer to the question “What events are connected to 2021-09-27?” I better have a DATE table with one record having the “name” “2021-09-27” (the date). Every time I need to connect some entity to a date, instead of having the actual date value in the column, I use a foreign key value to connect to the DATE table and a specific record there. In this way I have a non-redundant system where a specific date is centralized into one single entity. Again, thinking about applying this way of thinking to Datomic as well..#2021-09-2713:45FredrikDatomic natively supports values of type java.util.Date; just make an attribute with value type :db.type/instant.#2021-09-2713:46Tobias SjögrenThe cause of “converting” a date from being a value into being an entity is not to be able to attach any more attributes to it, but to centralize it.#2021-09-2713:47FredrikI'd say there's nothing more centralized than values of immutable types. They are centralized by the language itself.#2021-09-2713:50Tobias SjögrenI kind of get a sense of what that means..#2021-09-2713:52Tobias SjögrenWould you say modelling dates as entities has no benefits in Datomic?#2021-09-2713:52FredrikEvery reference to the number 1 in your code is a reference to the same thing. It's a reference to the same underlying sequence of bits. The JVM does not make a copy of 00000001 every place you need it. In Clojure the same thinking should be applied to any kind of value: maps, lists, vectors, strings, booleans etc.#2021-09-2713:53FredrikA part of data modeling will be to figure out what kind of things you should make entities, and what you keep as values.#2021-09-2713:54FredrikAnything you can give some kind of identity should be an entity.
Something that can have value X for some attribute at some point in time, but later that value might change to Y.#2021-09-2713:55FredrikA date, for example, can never change. The date 2021-01-01 will never change into 2021-01-02, that's just nonsense! But today's date will advance over time.#2021-09-2713:56Tobias SjögrenAn identity should only be given to something that has the capacity to change, do you mean?#2021-09-2713:59FredrikNo, I don't think so.#2021-09-2714:00Tobias SjögrenOK - are you saying that a date cannot have an identity?#2021-09-2714:01FredrikNo, I would definitely say a date has an identity#2021-09-2714:01Tobias SjögrenWhich means that you tend to think of a date as an entity?#2021-09-2714:02FredrikAgain, this depends on your domain. Are you making an app to show what happened on a given date in history? Then a date like 2010-12-24 could be an entity with facts about it#2021-09-2714:02Tobias Sjögren“Anything you can give some kind of identity should be an entity.”#2021-09-2714:03Tobias Sjögrenok#2021-09-2714:04FredrikI should have said, "and for which the built-in literals don't suffice"#2021-09-2714:05Tobias SjögrenOff the top of my head it is hard to think of something that has no identity in general.. Haven’t thought about it too much, so I maybe shouldn’t say that..#2021-09-2714:05FredrikAre you recording the time when a customer placed an order? That's most likely just a literal date.
Using literals whenever you can gives you many benefits, for instance you can use any built-in function to compare or transform them.#2021-09-2714:06FredrikWhen I used the word "identity" above, it really meant what I said in the following sentence: "Something that can have value X for some attribute at some point in time, but later that value might change to Y."#2021-09-2714:06Tobias SjögrenI imagine asking the question “What happened on date X?” would be easier to answer if each date were modelled as an entity..#2021-09-2714:07FredrikYou can use < and > directly in a query in Datomic to compare dates#2021-09-2714:09Tobias SjögrenThen, identity has to do with the ability to change (?) was my response to that.. Which seems odd to me..#2021-09-2714:11Tobias SjögrenAgain, Fredrik (and Anders and others) - great thing to have the opportunity to discuss here..#2021-09-2714:17FredrikSorry for the confusion, I said "identity" when I instead meant "something whose attributes can change". In the end I guess you can give anything an identity.#2021-09-2714:19potetm@U02G1DKNWKT You keep mixing up a few concepts 🙂#2021-09-2714:19potetmA value is a piece of immutable data.
That’s it.#2021-09-2714:20potetmIt could be a string, a number, a date, a collection—a list, a set, a hashmap.#2021-09-2714:20Tobias SjögrenI’m listening!#2021-09-2714:21potetmAs long as it’s immutable, as long as it can be compared to other values, it is a value.#2021-09-2714:21potetmSo the example you keep returning to: "light-green" vs {:color "green", :tint "light"}#2021-09-2714:21Tobias SjögrenA value is a piece of immutable data as long as it is immutable?#2021-09-2714:22potetmBoth of those are values.#2021-09-2714:22potetmYes, as long as the thing you’re talking about is immutable and can be compared it is a value.#2021-09-2714:23potetmSo you can compare 2 hashmaps by, say, looking at the key-value pairs.#2021-09-2714:24potetmSo, again, just so we’re 100% clear: In the example of colors that you keep returning to, both of the options that you lay out are values.#2021-09-2714:25potetmEntities build on top of values. You make entities out of values.#2021-09-2714:25potetmAn entity is a series of values.#2021-09-2714:26potetm{:color "green"} -> {:color "blue"} -> {:color "red"}#2021-09-2714:27potetmSo that^ is an entity that changes color three times.#2021-09-2714:27potetmYou can model this a few ways, but the easiest is to give each entity a unique ID — just like you do with a SQL row!#2021-09-2714:28potetmSQL rows are entities. Each row is a value that changes over time.#2021-09-2714:29potetmIf you give each entity an ID, then you can easily talk about changes over time for a given entity:
{:id 1 :color "green"} -> {:id 1 :color "blue"} -> {:id 1 :color "red"}#2021-09-2714:29potetmNow instead of inferring that we’re talking about the same entity over time, you know for sure that we are, because we use the same ID.#2021-09-2714:33potetmAnd that’s pretty much it: Values are immutable pieces of data. They can be solo things like strings, numbers, and dates. They can be collections of things like vectors, sets, and hashmaps.
Entities are values changing over time. You usually want to have an ID attached to an entity so that you can see that you’re talking about the same entity (e.g. user, document, account) even though their values change over time.#2021-09-2714:34potetmDoes that clarify anything?#2021-09-2714:36Tobias SjögrenI am for sure in the process of understanding..#2021-09-2714:56Tobias SjögrenPausing to digest..#2021-09-2716:04Tobias Sjögren@U07S8JGF7 How can values change when they are immutable?#2021-09-2716:05FredrikThe values themselves never change.#2021-09-2716:05potetmYou change from one value to another value.#2021-09-2716:06potetmBut it’s the same entity.#2021-09-2716:06potetmatom in clojure works like this. You can swap! in a new value to a memory location, but it’s the same memory location over time (i.e. the same entity over time).#2021-09-2716:07Tobias SjögrenSo to say that “Entities are values changing over time.” is a bit dangerous, right? (I get what the intent is though)#2021-09-2716:07potetmNo, I think it’s accurate, but perhaps easy to misconstrue.#2021-09-2716:08potetmMore precisely: entities are a series of values over time.#2021-09-2716:08Tobias SjögrenWhat is changing is the entity, not the values.#2021-09-2716:08potetmCorrect.#2021-09-2716:09Tobias SjögrenFor an entity you choose a set of immutable values and when you “change” the value you are in fact choosing another value.#2021-09-2716:09potetmcorrect#2021-09-2716:09Tobias SjögrenI can notice some progress here..
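potetm's atom analogy can be sketched directly (an editorial illustration): the atom is the entity, the stable identity; each `swap!` makes it refer to a new value, and values observed earlier remain unchanged:

```clojure
;; The atom plays the role of the entity; the maps are values.
(def color (atom {:id 1 :color "green"}))

(def v1 @color)                    ;; observe the current value
(swap! color assoc :color "blue")  ;; the entity moves on to a new value
(def v2 @color)

v1  ;; {:id 1 :color "green"} — the old value did not change
v2  ;; {:id 1 :color "blue"}
```

This is the same picture as a Datomic entity across transactions: a series of values over time, joined by a stable id.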
#2021-09-2716:09FredrikAre you btw familiar with how pointers work in C or C++?#2021-09-2716:11Tobias SjögrenNot really. What is kind of odd is that I have yet to learn my first programming language.. I have a sense of what pointers are though - I think of them as references or foreign keys..#2021-09-2716:12potetmProbably better to consider them at a different time then. 🙂 They’re kinda related, but not at all necessary to understand entities and values.#2021-09-2716:12Tobias Sjögrenright#2021-09-2716:19Tobias SjögrenAgain, whether “green” is the name of an entity, or a literal value - is a modeling decision, right?#2021-09-2716:19FredrikYes. If you want to record facts in your database about the color green, then make it an entity#2021-09-2716:23potetmTo disambiguate: name of an entity means {:name "green"} and literal value means "green"#2021-09-2716:23potetmAnd yeah, just a modeling decision. It should be made based on your needs.#2021-09-2716:25potetmThere are some questions that can help you make that decision, but it has nothing to do w/ entities and values. It has to do with, “How is this used? What kinds of flexibility do you want to prepare for?”#2021-09-2716:26Tobias SjögrenRight.#2021-09-2716:26Tobias SjögrenWould you say that “Entities are values that might change over time” is more accurate than “Entities are values changing over time” ? It is not mandatory that there is change, just a possibility#2021-09-2716:27FredrikYes, that's entirely possible#2021-09-2716:27potetmWithout delving into philosophy, yeah that sounds right to me 😄#2021-09-2716:28Tobias SjögrenI think much of what is talked about here actually has a philosophy aspect to it..;)#2021-09-2716:28FredrikI mean, the question of what an entity is opens up a large philosophical discussion, similar to the question of identity.
And talking about how Clojure or Datomic deal with these things is important, because it helps us understand their design and how they differ from others; but because words generally mean different things in different contexts, the discussion gets hard.#2021-09-2716:29potetmTrue. My mind immediately went, “Well, all things change given a long enough timescale,” which is probably not helpful in this discussion.#2021-09-2716:32Tobias SjögrenMy feeling is that they should be addressed in more detail, Fredrik.#2021-09-2716:33Tobias SjögrenFor me as a newcomer it would certainly help..#2021-09-2716:40FredrikIt might help to remember that the two issues we've been discussing are separate from one another: Entities vs values, and immutable values.#2021-09-2716:42Tobias SjögrenWhich, among other things, brings up the question of the definition of values. Is there such a thing as immutable values and mutable values, or are values always immutable? (outside of Datomic/Clojure)#2021-09-2716:43FredrikIn Clojure? All the data structures are immutable by default: Numbers, vectors, lists, hash maps, sets etc. are all immutable.#2021-09-2716:43Tobias Sjögrenoutside of Clojure#2021-09-2716:44Tobias Sjögrenin the common understanding of what a value is in programming#2021-09-2716:47Tobias SjögrenComing back to the example of variables and their values (outside Clojure) - am I changing the value of the variable or am I choosing another stable (immutable) value as the content for my variable? For me, this “nuance” makes a huge difference..#2021-09-2716:49FredrikThis depends on the language and what type of value we're talking about. Strings in Python are immutable, strings in Ruby are not. Furthermore, in both cases the "value of the variable" can have different interpretations, either meaning "the value of what it points to", or "the address in memory of the value it points to".
Being immutable implies you can never change the former, only the latter.#2021-09-2717:02potetm@U02G1DKNWKT The definition of value that we’re talking about came from Rich.#2021-09-2717:02potetmEveryone else uses the term loosely or not at all.#2021-09-2717:03potetmVariables (e.g. var i = 0) in traditional programming languages are not values at all.#2021-09-2717:04potetmLike, Fredrik said, whether that variable points to a value depends on context.#2021-09-2713:08Tobias SjögrenDoes anyone have a sense of why the triple parts of the Datom are called Entity-Attribute-Value like in EAV and not Subject-Predicate-Object like in RDF (https://en.wikipedia.org/wiki/Resource_Description_Framework) ?#2021-09-2717:50Tobias SjögrenHas anyone here become acquainted with The Associative Model of Data ? (https://web.archive.org/web/20181219134621/http://sentences.com/docs/amd.pdf)
It is also based upon triples but uses Source-Verb-Target instead of Entity-Attribute-Value, which in itself is interesting as a comparison.#2021-09-2718:47Linus EricssonI guess Rich was aware of most of the common previous research before starting with Datomic. Obviously there are similarities between RDF and Datomic, but also differences, like the time/transaction component. I'm not familiar with what query language is commonly used with RDF but I guess it is not datalog. RDF does AFAIK not have the idea of reified transactions.
RDF also does not prescribe a certain data type or ordering for the tuples in the model, but seems to speak about them in more general, mathematical terms. Nothing wrong with that, but things like the transaction log and the entity view of the database are not very clearly outlined as concepts (at least not in the RDF spec).#2021-09-2820:06Tobias SjögrenFor me, the interesting comparison between Datomic and RDF is the triple one (Entity-Attribute-Value vs. Subject-Predicate-Object). Considering the full fact (datom) - when presented with the time aspect on top of the basic triple, it is hard to understand why anyone would want to omit time awareness of the facts..#2021-09-2820:07Tobias SjögrenFor instance, Subject and Object “feel” more like similar things than Entity and Value.#2021-09-2820:08Tobias SjögrenGoing from Predicate to Association/Relationship “feels” closer than Attribute to Association/Relationship…#2021-09-2820:11Tobias SjögrenOne thing I’m aiming for here in this reasoning is that Attribute could/should just as well be seen as an Association/Relationship - as a Value Type.#2021-09-2718:11jaretHowdy all! We have released dev-local 1.0.238 with today's release of Dev-tools 0.9.64. https://forum.datomic.com/t/cognitect-dev-tools-version-0-9-64-now-available/1957
#2021-09-2719:51azHi, any thoughts on multi-tenancy with datomic? I’ve been searching through the discussions and it seems like there have been changes with cloud that make multiple dbs ok. Any tips would be great. Thanks#2021-09-2814:58jaretThe quick and dirty: Multi-tenancy in on-prem is a no-go. There is not an enforced limit on DBs in on-prem but there are operational considerations making it a poor fit. Chiefly because the transactor was designed to serve a single primary DB (some small secondary DBs are OK for operations type tasks), but the transactor has to hold in memory the sum of each DB's memory index. With large enough DBs this becomes a resource problem. Furthermore there are no per DB stats in Datomic on prem, all DBs compete for space in the object cache, queries and transactions compete for CPUs and garbage collection pauses have impact across all DBs. You can certainly run multiple DBs, but I recommend that any mission-critical DB have their own dedicated transactor and peer processes.
Multi-tenancy in Cloud is fully supported and you can have 100s to thousands of separate DBs on a Datomic cloud system. There are still operational impacts to having so many DBs but you can scale compute nodes to optimize performance, utilize query groups to offload reads per DB and have the ability to scale. If you are planning on going this route, I'd love to have a call with you to discuss your specific needs. I can bring along another member of the Datomic team and we can make sure we understand your specific use-case.#2021-09-2815:00jaretIf that is something that interests you, let me know and we can arrange a meeting to discuss. Or if you prefer to work async you can write in your questions to <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. Cheers!#2021-09-2819:59Jakub Holý (HolyJak)Great to know! I planned to use datomic on-prem with multi tenancy 😅 Perhaps good we settled on psql so I didn't run into this.
It would be nice if the docs included this (or do they?)#2021-09-2913:37jaretYeah this is covered to some extent in on-prem docs here: https://docs.datomic.com/on-prem/operation/capacity.html#multiple-databases
#2021-09-2916:32az@U1QJACBUM thanks so much for the reply. That’s great news. If we need to scale to say 20 or so tenants with a use case of an inventory system for small restaurants (to give a sense of scale) would we need to do anything manually on datomic cloud? Or would that load likely be handled out of the box? Once we get to that level we would have better resources to then start tuning however necessary#2021-09-2916:36jaret@U0AJQJCQ1 Yeah it should absolutely handle that kind of scale easily. And scaling cloud is as easy as adding compute node resources or moving up instance sizes. Caveat: I am imagining the total datoms throughput/total size being small for these restaurants. I am happy to chat about specifics.#2021-09-2808:11hdenHi, I'm seeing the following errors in a Datomic Cloud cluster. Any idea what went wrong?
Unable to execute HTTP request: Read timed out while invoking (datomic.client.api/list_databases client)#2021-09-2808:31Linus Ericssona network read timeout could either imply the port we try to reach is not open (the application is not running) or that there is something not allowing the application to contact the ip (like not correctly configured VPCs)#2021-09-2808:36hdenYeah… it’s a single instance within a cluster of a Datomic Query Group, so I don’t think it’s a mis-configured VPC (otherwise we should be seeing the same error from all the instances)#2021-09-2808:36hdenMaybe it’s a networking issue?#2021-09-2810:26Ivar RefsdalAnyone have the datomic transactor on-prem logging to Fluentd? We are using version 1.0.6344.
I would try fluency-core and such, but I see there are some conflicts. For example:
fluency-core 2.6.0 brings com.fasterxml.jackson.core/jackson-databind 2.10.5.1 whereas Datomic 6344 includes jackson-databind-2.12.3.jar. It could potentially "just work" of course, but I am curious if anyone has already made this setup.
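A common workaround for this kind of conflict is to exclude the older jackson-databind from the Fluency coordinate so the newer version shipped with Datomic wins on the classpath. A deps.edn-style sketch (the versions are from the messages above; whether the two jackson versions are actually binary-compatible for Fluency is an assumption to verify by exercising the logging path):

```clojure
;; Sketch only: exclude Fluency's jackson-databind 2.10.x so that the
;; jackson-databind 2.12.3 bundled with Datomic is the single copy on
;; the classpath. Test your logging after doing this.
{:deps
 {org.komamitsu/fluency-core
  {:mvn/version "2.6.0"
   :exclusions  [com.fasterxml.jackson.core/jackson-databind]}}}
```

Jackson is generally backwards-compatible across minor versions, so this usually "just works", but it is not guaranteed.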
Edit: Thanks!#2021-09-2911:41furkan3ayraktarHi, is it possible to attach more than one NodePolicyArn to the Datomic Cloud instances? We hit Amazon’s IAM policy size limit (6144 chars, https://aws.amazon.com/premiumsupport/knowledge-center/iam-increase-policy-size/) with our current policy and I was looking for a workaround. I would like to hear if anyone hit the same problem before and overcame it.
#2021-09-2912:35Tobias SjögrenAnyone know if the idea of “nested Datoms” has been discussed anywhere?#2021-09-2912:48pyryWhat do you mean by that?#2021-09-2912:52emccueI think conceptually that's handled by the same structure as transactions#2021-09-2912:52emccuefor entity E1 you learn fact F1 has a value of V1#2021-09-2912:52emccuefor entity E1 you learn fact F2 has a value of V2#2021-09-2912:53emccueand you learn that in transaction T1#2021-09-2912:54Tobias SjögrenBy “nested Datom” I mean that one datom can be the entity (“E” position) of another datom.#2021-09-2912:58favilaWhat use case are you thinking of? As Ethan said, transactions are meant to cover most of these. Datomic doesn't support arbitrary reification for any entity value (i.e. like RDF does), but transactions are a reification mechanism over groups of datoms (the transaction data)#2021-09-2913:07Tobias SjögrenFor example, to represent this information: “Rich is employed by Cognitect as the chief architect.”#2021-09-2913:08Tobias Sjögren(trying ideas..)#2021-09-2913:17favilaI'm not sure how that would apply?#2021-09-2913:30Tobias SjögrenI guess a Datom essentially can connect two “things” (Entity and Value). When you need to connect more than two things - as in my example: “Rich”, “Cognitect” and “Chief Architect” are three “things” - it seems to make sense to use nesting.#2021-09-2913:37Tobias SjögrenThis nesting is something that the “Associative Model of Data” supports (see my above thread).
Also, it seems it is a proposed addition to the RDF standard called “RDF-star”.#2021-09-2913:43emccueI think what you want isn't "nested datoms" - datoms can represent this relationship#2021-09-2913:43emccuextdb afaik decomposes documents with nested structures into datoms internally#2021-09-2913:44emccueit might be just a way to turn
{:thing {:wow 1}}#2021-09-2913:44emccueinto a set of datoms which represent the nesting#2021-09-2913:44favilaWhat is the advantage over either an entity to represent the relationship or a compound value with two references?#2021-09-2913:45favilaIe the usual ERD modeling#2021-09-2913:49Tobias SjögrenI’m in the process of trying to find out. How would you express “Rich is employed by Cognitect as the Chief Architect.” as Datoms?#2021-09-2913:56favilaAssuming the point is multiple employers at once: [rich :employment e][e :employer cognitect][e :role chief-architect]. Or even reverse direction of :employment#2021-09-2913:56favilaSame thing I would do in a relational db or a doc db#2021-09-2914:06Tobias SjögrenWhich datom states that Rich is the Chief Architect?#2021-09-2914:07emccue[e :role chief-architect], where e is the identity of the employment#2021-09-2914:22Tobias SjögrenI guess using nested datoms, it could be like this (where “xxxx” in the second datom represents the first datom):
[rich :employment cognitect]
[xxxx :role chief-architect]
#2021-09-2914:45favilaIt's unclear to me that this is what it would mean. If this were reification, xxxx would mean the fact itself is a chief architect, not rich-at-cognitect#2021-09-2915:58Tobias SjögrenIt would be interesting to hear what you think of this: https://enterprise-knowledge.com/rdf-what-is-it-and-why-do-i-need-it/#2021-09-2916:22favilaThis looks like just another way to encode the same thing, and the predicate now has to carry the nuance of what part of the triple is its true object (i.e. is it “adverbial”, modifying the fact, or is it a “meta statement” about the statement itself). Maybe it's more compelling in an open system like RDF where you can't control how people encode things, so you may get stuck needing to annotate a fact instead of an entity. In a closed schema I think I'd rather just not deal with this nuance.
#2021-09-2916:25favilaMany systems (even in RDF) do add a special-purpose handle (often an extra component to the triple, like ?tx in datomic) to make “higher order” facts expressible. This is a generalization of that, so maybe it will be fine.
#2021-09-2916:34Tobias SjögrenOne more example (if you don’t mind) - how would you express this sentence in datoms?: “Flight BA1234 arrived at Heathrow Airport on 12-Aug-1998 at 10:25am.”#2021-09-2921:51favilaIt would depend on what this was for, but a first cut would be three datoms joined by an entity that represents an arrival. In pseudocode, [e :arrival/flight BA1234][e :arrival/airport heathrow][e :arrival/time 12-Aug-1998 10:25am]
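favila’s arrival pseudocode can also be sketched as Datomic transaction data. The attribute names, the schema, and the lookup ref below are illustrative assumptions, not anything from an actual system in the thread:

```clojure
;; Hypothetical attributes: :arrival/flight (string), :arrival/airport
;; (ref, here via an assumed :airport/code lookup ref), :arrival/time
;; (instant). One entity joins the three facts, as favila describes.
[{:arrival/flight  "BA1234"
  :arrival/airport [:airport/code "LHR"]
  :arrival/time    #inst "1998-08-12T10:25:00.000-00:00"}]
```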
#2021-09-3008:31Tobias SjögrenThat makes sense and really helps me in the quest of trying to understand!
In “The Associative Model of Data” (https://web.archive.org/web/20181219134621/http://sentences.com/docs/amd.pdf) - which seems to be a precursor to Datomic that, like RDF-star, can nest facts - it would be represented like this:#2021-09-3013:46favilaI’m not sure how related they are. Datomic is inspired by rdf’s basic idea of a “triple” expressing a fact, but I think the influence kind of ends there. I think the datom is in the service of a finer-grained truth model, and what you call the “nesting” (the transaction-entity annotation) is in service of the epochal time model, not supplying a new way to express domain-model concepts (in fact, over-using datomic datom history for domain-accessible features is a common datomic pitfall https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html). In other respects it’s pretty traditional. The world is still closed, multi-valued relationships are still reified, and the normal entity-relationship modeling process pretty much still applies.
Would you say the “nesting” wouldn’t help you build Datoms?
I don’t quite understand the connection between the transaction and the so called nesting…
Thanks for the blog link - are there any more Datomic blogs you would recommend?#2021-09-3016:51favila> I don’t quite understand the connection between the transaction and the so called nesting…
A datom is [E A V TX OP], where TX is the transaction entity the fact is from, and OP is whether this is an addition or retraction of the fact in that TX. So you can treat a datom as a thing “nested” in a transaction because you can join the TX entity to the whole fact. Conceptually [TX :db/add [E A V]], [TX :db/retract [E A V2]] vs. [E A V TX :db/add][E A V2 TX :db/retract]. It’s just encoded into the fact itself instead of being a general mechanism.#2021-10-0109:31Tobias SjögrenWhich means you can write the datoms as fact additions/retractions nested inside the transaction datom(s)? The effect is still that one or more datoms are connected through a single transaction datom, right?#2021-10-0111:06favilaThe datoms are objects, not subjects, of facts about the transaction. So they’re connected via a transaction entity, not a transaction datom.#2021-10-0111:15Tobias SjögrenRight.#2021-09-2915:15jaretHowdy everyone! Got a slew of announcements for Cloud and Ions:
https://forum.datomic.com/t/ion-dev-1-0-294-and-ion-1-0-57/1965#2021-09-2915:15jarethttps://forum.datomic.com/t/new-client-cloud-1-0-117-release/1964#2021-09-2915:16jarethttps://forum.datomic.com/t/datomic-cloud-936-9118/1966#2021-09-2915:41kennyWhat does “self-unification within a single clause” mean?#2021-09-2923:23jaretHi @U083D6HK9! We had a report from a customer that in a very specific use case where they had a self-referencing entity (`:foo/ref`) that the single clause [?e :foo/ref ?e] would return all entities instead of a subset of the entities which matched the clause. This was undefined behavior and so we decided to add a feature that would unify on a single clause. Here is a gist of the previous behavior:
(d/q '[:find (count ?e)
:in $
:where
[?e :some/attr ?e]]
(d/db conn))
;=> [[3]] ;; returns a count of entities where the left ?e is a subset of the right ?e (note there are 4 total entities and 1 self referential)#2021-09-2923:24jaretNow this same query should return the single self referential entity and unify on ?e.#2021-09-2923:26kennyThank you for the thorough explanation. Makes total sense. Interesting use case. #2021-09-2923:27jaretIt's definitely an edge case. Adding a second clause for instance gets the desired behavior... i.e.
(d/q '[:find (count ?e1)
:in $
:where
[?e1 :some/attr ?e2]
[(= ?e1 ?e2)]]
     (d/db conn))
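For comparison, after the release described above the single-clause form should unify on its own. A sketch under the same assumed `:some/attr` data as jaret’s gist, not verified against a live system:

```clojure
;; With single-clause self-unification, this should now count only the
;; self-referential entities, matching the two-clause workaround above.
(d/q '[:find (count ?e)
       :in $
       :where
       [?e :some/attr ?e]]
     (d/db conn))
```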
#2021-09-3013:36octahedrionI'm trying to use analytics with dev-local and I think I have my classpath set correctly and all the relevant .properties files etc correct, but I'm getting Query 20210930_132956_00043_98grr failed: Unable to load client, make sure com.datomic/dev-local-bundle is on your classpath when I try to execute an SQL query in the trino client - why?
#2021-09-3020:55jjttjjIt's mentioned (https://docs.datomic.com/cloud/whatis/architecture.html) that datomic uses dynamodb for the transaction log. I'm just learning dynamo for unrelated things and I'm curious what the table structure datomic uses looks like. Is it just something like e + a for partition key and t for sort key?#2021-09-3021:05favilano, it’s just binary or edn blobs. DynamoDB (all storages really) are just dumb key-value stores. Only a handful of keys are mutated and need strong consistency guarantees, and dynamo conditional writes are used for those#2021-09-3021:11jjttjjMakes sense, thanks#2021-10-0110:19pedrorgirardiI’m upgrading Datomic Cloud for the first time, and I managed to ‘break’ it. :man-facepalming: I selected the storage nested stack, and followed the steps, but I’m getting:
> The following resource(s) failed to update: [EnsureAdminPolicyLogGroup].
What does that mean exactly?#2021-10-0112:49hdenI think your data are fine.
Try submitting a support ticket.
https://support.cognitect.com/hc/en-us/requests/new#2021-10-0113:25jaret@U5GP9FMC0 I think its really important to note that you didn't "break it", what is happening here is the update CFT operation is failing, because a resource cannot be deleted by Datomic. It will roll back to the previous version. Any resource that has been modified will not be deleted by Datomic.#2021-10-0113:26jaretand as @U0HLHE6JE says your data is totally safe and your system is still operational. I would be happy to walk you through upgrading or look at the specifics of your situation. If you do log a ticket please send me what steps you were following and what version you are upgrading from and to.#2021-10-0113:31pedrorgirardiThank you @U0HLHE6JE @U1QJACBUM.
My wording was misleading @U1QJACBUM; the system is fine, and I didn’t lose anything.
I will create a ticket and let you know. Thanks in advance.#2021-10-0201:22pedrorgirardi@U1QJACBUM I created the ticket https://support.cognitect.com/hc/requests/3311#2021-10-0110:32ChrisCan anyone recommend some open source codebases that use Datomic On-Prem? My team have been having discussions about different practices and what advice does and doesn't transfer from traditional DBs, and it would be nice to have some view of how other people approach it.#2021-10-0112:31TwanWe try to upgrade Datomic Client (on-prem) from 1.0.6202 to 1.0.6344. After upgrade, we are not able to perform queries on :db-after (as a result of a transaction) any more. We get errors either saying (`d/q`) Query db arg does not match connection or (`d/pull` )`db not found` . Do you have any clue?#2021-10-0112:47TwanWe downgraded to 1.0.6269 which resolves the issue, so something between 1.0.6269 and 1.0.6344 is likely the cause#2021-10-0113:29jaretHi @U9M6WJ9PV what version of client-pro are you using? Did you update your peer-server since you are using a peer-server to utilize client?#2021-10-0113:36TwanHi @U1QJACBUM! Our client-pro is on 1.0.72 The peer server was also on 1.0.6344 (and 1.0.6269 respectively)#2021-10-0113:37jaretCan you share a full gist or snippet. I'd like to immediately try to re-create this 🙂#2021-10-0113:37jaretA repl history would be enough for me to see if you are doing anything I am not doing in re-creating.#2021-10-0113:38Twan(def res (d/transact conn {:tx-data [{:db/id 17592186190002 :nedap.source.people.person.wm/first-name "Some Name"}]}))
(d/q '{:find [(pull ?e [*])] :where [[?e :nedap.source.bournedrasil.wm/key "something"]]} (:db-after res))#2021-10-0113:39TwanI hope you get the gist of it#2021-10-0114:29Joe Lane@U9M6WJ9PV What is the actual value of res? is it a map with the :db-after key or is it an anomaly?#2021-10-0114:32Lennart Buit(I’m from the same company) It's just a successful transaction result, so with :db-after/`:db-before`/`:tempids`/`:t`. What we do notice btw is that the :database-id between (d/db conn) and (:db-after res) differs. The former being db-name + hex string (uuid?), the latter being a jdbc url.#2021-10-0114:36Joe LaneAre y'all using on-prem with sql storage (which one?) and issuing these operations against a peer server via the client API?#2021-10-0114:37Lennart BuitYes, we use postgres, and we are issuing these operations against a peer with the client API
#2021-10-0114:42Joe LaneIs it easy for your team to try with 1.0.6316?#2021-10-0114:47TwanWe already did. I copied the wrong version in my first thread message#2021-10-0114:47TwanIn that version everything was fine#2021-10-0114:51Lennart BuitWhat I noted about :database-id being different between (d/db conn) and (:db-after res), that is not the case on 1.0.6316 and they both contain jdbc urls#2021-10-0114:53Joe LaneCan you now try using the bin/repl command and the peer API to see if you get the same behavior?#2021-10-0114:57Lennart BuitOn 1.0.6344, or on 1.0.6316 ?#2021-10-0115:03Lennart BuitWould you mind if we park this discussion until Monday ^^. It's 5:02PM here, Friday afternoon, so not the best moment to start fiddling with the database 🙂. Have a good weekend!#2021-10-0115:04jaret@UDF11HLKC @U9M6WJ9PV we have reproduced this issue.
#2021-10-0115:04jaretI am discussing with the team now and will update you.
#2021-10-0117:20jaretAs an update, we isolated the problem and will work on a fix for an upcoming release. For now the workaround is to downgrade ONLY the peer server process to 1.0.6316.
#2021-10-0117:22jaretThank you both for reporting this!#2021-10-0118:19TwanGood to know, thank you for your follow-up#2021-10-0119:25Lennart BuitThanks for the openness & follow up :)!#2021-10-0115:24Tatiana KondratevichHello, at https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion, I noticed the request gets :uri. How do I get it right inside my http function?
I am currently using metosin/reitit to create routing - does anyone have experience using this inside Datomic?#2021-10-0122:05kennyGood afternoon. I'm executing an Ion deploy that is failing at the DownloadBundle stage. In the event log, the following message is present:
Cannot allocate memory - rm -rf /opt/codedeploy-agent/deployment-root/16ccad37-2dcd-49b4-81d5-65b944bab806/d-603RLTGJC 2>&1
I have not hit this error before, so I am curious what the best way to resolve it is. I can, of course, provide more info if relevant.#2021-10-0122:14kennyfyi, terminating the instance and having the asg start a new one fixed it.#2021-10-0214:44prncI’ve seen this before, w/ the same workaround as yours, not sure what a “proper fix” is#2021-10-0215:31kennyWhy does q (https://docs.datomic.com/cloud/query/query-data-reference.html#q) only support the variable-arity variant of q?
Quote from https://docs.datomic.com/on-prem/operation/ha.html#2021-10-0413:46Linus EricssonThe host and alt-host should point to the IPs of that particular running transactor - usually one is a public IP and the other an internal IP. Two HA transactors should have different IPs.
The IPs are written to the back storage by the transactor, and the connected peers deduce which transactor is the active one via a protocol based on the transactors writing heartbeats to the back storage, then use the IPs from the backstore to connect to the active transactor.
The backstore serves as “transactor service discovery” as well as a kind of liveness check that implicitly decides which transactor is actually alive.#2021-10-0413:47Linus Ericsson(by “HA transactor” I mean two or more transactors running against the same backstorage, which gives high availability to the transactor service).#2021-10-0415:17Ivar RefsdalOK, thanks! As far as I understand, the documentation about identical transactor properties files is then incorrect. I think this should be corrected.#2021-10-0419:04Ivar RefsdalI am encountering a strange OutOfMemoryError for a relatively simple query.
I don't think it creates a cross product.
Any suggestions on why this would happen?
https://gist.github.com/ivarref/7059a0fe79b3353187dad9e187928da0.#2021-10-0419:11Joe LaneHi @UGJE0MM0W , your query clauses can be thought of as executing these steps:
1. Find all transactions in the entire system which have the attribute :tx/user-id, and load them all into memory.
2. Of all of the txes with the attribute :tx/user-id, grab the :db/txInstant from each transaction.
3. Filter the transactions realized in step 1 and 2 by whether they started after today in Oslo
4. Filter the remaining transactions from Step 3 by whether they happen before tomorrow.
5. Count the remaining txes
I suspect your query is failing on Step 1 because it is querying all txes in the database (and if it doesn't fail yet, it will get slower until it fails as more data is added to the system).
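One way to avoid Step 1's full scan is to let the time window drive the search via the Log API. A hedged sketch of the on-prem log-in-query form, assuming the peer API is aliased as `d` (the instants are illustrative):

```clojure
;; tx-ids narrows to transactions in [start, end) first, then checks
;; which of those carry :tx/user-id - the reverse of the original order.
(d/q '[:find (count ?tx)
       :in $ ?log ?start ?end
       :where
       [(tx-ids ?log ?start ?end) [?tx ...]]
       [?tx :tx/user-id _]]
     (d/db conn)
     (d/log conn)
     #inst "2021-10-04"
     #inst "2021-10-05")
```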
#2021-10-0419:14Joe LaneCheck out the https://docs.datomic.com/on-prem/api/log.html#log-in-query example on the https://docs.datomic.com/on-prem/api/log.html page, hope that helps!#2021-10-0419:29Ivar RefsdalThanks a lot. I will read more thoroughly tomorrow.#2021-10-0508:05Ivar RefsdalI noticed that if I change my query from:
[[?tx :tx/user-id _]
[?tx :db/txInstant ?inst]
...
to
[[?tx :db/txInstant ?inst]
[?tx :tx/user-id _]
...
it does not OOM. Why would that be?
For the record there is about 12M datoms/transactions with
the :tx/user-id attribute. I figured this would fit easily in a ~4G heap?
What exactly is pulled into memory in step 1?
Only the datom matching ?tx :tx/user-id _
or the whole entity, i.e. pull [:*] ?tx?
My guess is that the OOM in the first query is somehow related to the fact
that :tx/user-id is not indexed, but :db/txInstant is. Is that a correct
understanding?
Note: I am mostly interested in understanding why the OOM happens, not solving
the problem per se.
Thanks again!#2021-10-0501:09jdkealyHi, I'm getting an error when I try to connect to a new transactor I set up using DynamoDB
I have [com.amazonaws/aws-java-sdk-dynamodb "1.11.600"] in my dependencies
I set up the transactor using cloudformation and it created the dynamo table for me. d/connect throws
Execution error (ClassNotFoundException) at java.net.URLClassLoader/findClass (URLClassLoader.java:382).
com.amazonaws.http.TlsKeyManagersProvider
Calling (d/create-database uri) returns
Execution error (NoClassDefFoundError) at java.lang.Class/forName0 (Class.java:-2).
Could not initialize class datomic.ddb_cluster__init
#2021-10-0502:52Joe LaneHey @U1DBQAAMB , that TLS key manager exception is due to an AWS SDK version mismatch in your peer. Remove (or change) your ddb dep and things should start working.
Execution error (ClassNotFoundException) at java.net.URLClassLoader/findClass (URLClassLoader.java:382).
com.amazonaws.services.dynamodbv2.model.PutItemRequest
#2021-10-0513:14jdkealyI'm not sure what version to change it to.
I'm using datomic-pro 1.0.6344#2021-10-0513:15jdkealyI also commented out all the other AWS sdk dependencies and namespaces. @U0CJ19XAM#2021-10-0513:16Joe LaneOk. Why don't you open a support ticket and we will get it sorted out.#2021-10-0511:30hdenAnyone experienced connectivity issue with analytics preview?
Symptoms
• Datomic Cloud https://docs.datomic.com/cloud/changes.html#781-9041
• Presto CLI v348
• It’s not a security group issue, since I can connect to the database via datomic client access command just fine.
• Restarting the access gateway doesn’t resolve the issue (`datomic gateway restart`)
; presto --server localhost:8989 --debug
presto> SHOW SCHEMAS FROM system;
Error running command: java.net.SocketException: Connection reset
java.io.UncheckedIOException: java.net.SocketException: Connection reset
at io.prestosql.client.JsonResponse.execute(JsonResponse.java:154)
at io.prestosql.client.StatementClientV1.<init>(StatementClientV1.java:135)
Submitted a support ticket: 3316#2021-10-0513:34hdenFixed the issue by restarting the Bastion via an EC2 instance refresh.#2021-10-0512:10vlaaadWhy can't I do this?
(db/q
'[:find ?c
:in $db0 $db1 ?id
:where (or [$db0 ?c :concept/id ?id]
[$db1 ?c :concept/id ?id])]
db0 db1 "3tq2_f1E_Lyh")
=> Execution error (ExceptionInfo) at datomic.client.api.async/ares (async.clj:58).
Nil or missing data source. Did you forget to pass a database argument?#2021-10-0512:52favilaYou can’t do this because rules (`or` is syntax sugar for a named rule) accept only one data source, implicitly $, explicitly ($ds or…)#2021-10-0512:53vlaaadah, makes sense. thanks#2021-10-0512:53favilaThat’s not a “deep” reason why, but it’s the proximate reason for your exception#2021-10-0512:12vlaaadside note: if only I could supply coll of databases, e.g. :in [$ ...] …
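Since `or` cannot span data sources, one workaround is to run the single-source query against each db and union the results in ordinary Clojure. A hedged sketch; the helper name is hypothetical:

```clojure
;; Run the same single-source query per database and merge the results.
;; Note: entity ids are only meaningful relative to the db they came
;; from, so keep track of the source db if you need to follow up.
(defn concept-eids [id dbs]
  (into #{}
        (mapcat (fn [db]
                  (d/q '[:find [?c ...]
                         :in $ ?id
                         :where [?c :concept/id ?id]]
                       db id)))
        dbs))

(concept-eids "3tq2_f1E_Lyh" [db0 db1])
```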
#2021-10-0512:43Linus EricssonIf you get the eid of ?c you will also need to know which db it exists in.
I think you should use ordinary clojure functions for doing a db-nil-safe lookup with (d/entid) or similar, working over the collection of dbs.
When you get the correct db, you can run further d/q queries.#2021-10-0512:53vlaaadIt’s a cloud, so I try to build a query that loads all I need instead of calling it multiple times. N+1..#2021-10-0512:55vlaaadanyway, I found what I wanted using multiple (q) s inside a query and then doing or on the results#2021-10-0512:59Linus Ericssonok, don't know enough about cloud obv.#2021-10-0513:05jarrodctaylorTypically you shouldn’t need to have excessive apprehension about making multiple queries https://docs.datomic.com/cloud/whatis/architecture.html#large-data-sets#2021-10-0513:09vlaaadmy concern is not large data sets but network latency when running many queries#2021-10-0513:16jarrodctaylorOften times that does not need to be a major concern. From the link “Datomic is designed for use with data sets much larger than can fit in memory, while providing in-memory performance for query”#2021-10-0514:11Joe LaneI think Vlad is using cloud via a client application, not from within an ion, that is why he is concerned about network requests.
#2021-10-0514:14jarrodctaylorDetails matter 🙂#2021-10-0514:02Benjaminhttps://docs.datomic.com/on-prem/overview/storage.html#provisioning-dynamo I'm trying to follow the guide for automatic dynamo transactor setup
bin/datomic ensure-transactor <file> <file>
yields
com.amazonaws.services.identitymanagement.model.AmazonIdentityManagementException: Must specify userName when calling with non-User credentials (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: b64c5009-5899-4dec-9c14-34d9f78b0e89; Proxy: null)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1372)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient.doInvoke(AmazonIdentityManagementClient.java:11007)
at com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient.invoke(AmazonIdentityManagementClient.java:10974)
at com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient.invoke(AmazonIdentityManagementClient.java:10963)
at com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient.executeGetUser(AmazonIdentityManagementClient.java:6128)
at com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient.getUser(AmazonIdentityManagementClient.java:6098)
at datomic.iam$get_user.invokeStatic(iam.clj:66)
at datomic.iam$get_user.invoke(iam.clj:66)
at datomic.iam$get_account_id.invokeStatic(iam.clj:109)
at datomic.iam$get_account_id.invoke(iam.clj:107)
at datomic.provisioning.aws$fn__31025.invokeStatic(aws.clj:493)
at datomic.provisioning.aws$fn__31025.invoke(aws.clj:491)
at clojure.lang.MultiFn.invoke(MultiFn.java:229)
at datomic.provisioning.aws$ensure_transactor.invokeStatic(aws.clj:665)
at datomic.provisioning.aws$ensure_transactor.invoke(aws.clj:659)
at clojure.lang.AFn.applyToHelper(AFn.java:154)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.core$apply.invokeStatic(core.clj:665)
at clojure.core$apply.invoke(core.clj:660)
at datomic.require$require_and_run.invokeStatic(require.clj:22)
at datomic.require$require_and_run.doInvoke(require.clj:17)
at clojure.lang.RestFn.invoke(RestFn.java:423)
at datomic$_main$fn__163.invoke(datomic.clj:150)
at datomic$_main.invokeStatic(datomic.clj:149)
at datomic$_main.doInvoke(datomic.clj:142)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.core$apply.invokeStatic(core.clj:665)
at clojure.main$main_opt.invokeStatic(main.clj:514)
at clojure.main$main_opt.invoke(main.clj:510)
at clojure.main$main.invokeStatic(main.clj:664)
at clojure.main$main.doInvoke(main.clj:616)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:40)
#2021-10-0516:35jaret@benjamin.schwerdtner That error would be thrown if you are missing credentials for an admin account required to create all the DDB tables, IAM roles, S3 Buckets, permissions etc.#2021-10-0516:35jaretCan you try with admin credentials sourced?#2021-10-0516:36jaretAre you trying to use temporary or assume role credentials? I use static admin credentials when using our ensure etc#2021-10-0516:57Benjamin@jaret ah that sounds promising, I'll try. It wasn't with admin creds#2021-10-0517:00Benjamin@jaret I'm copying those credentials and it also does something (if I don't do it it throws another err about token not set or sth). --> but still have the same err with username#2021-10-0611:09Benjaminfixed. I figured out I needed an IAM role instead of those account credentials. Not at all obvious to an AWS newbie#2021-10-0518:44eggsyntaxOn-Prem (Pro) pricing question: if I want to use Datomic for a single app, but for compliance reasons I need separate instances of the app/db/transactor in different AWS regions, does that count as a single Datomic license or as one per region? Couldn't find an answer in the docs or on http://ask.datomic.com.
> Interested to hear more about the multi region requirements
As I understand it (though I'm not an expert in this aspect) it's the usual sort of issue where various countries require that certain kinds of data about people have to be stored/processed in either the same country or same region. That's a pre-existing business requirement; I'm considering Datomic for a new project within the business & trying to do an initial cost estimate given the existing requirement.#2021-10-0519:20jarrodctaylorYes, in that case each region would be an independent system so one license each. Feel free to ping if you have any follow up questions.#2021-10-0519:23eggsyntaxGot it. Thanks Jarrod!#2021-10-0701:28onetomIs there some established or off-the-shelf solution to “rename” a Datomic Cloud DB?
I would be fine with downtime and I don't mind losing the :db/txInstant values either, so some kind of dump and restore operation would be perfectly suitable.#2021-10-0712:04BenjaminHi I'm trying to start a local ddb transactor and connect to it
~/datomic/datomic-pro-1.0.6316 $ ./bin/transactor ../ddb-template-local.properties
Starting datomic:<DB-NAME> ...
System started datomic:<DB-NAME>
(def cfg {:server-type :peer-server
:access-key "myaccesskey"
:secret "mysecret"
:endpoint "localhost:8000"
:validate-hostnames false})
(require '[datomic.client.api :as d])
(def client (d/client cfg))
(def conn (d/connect client {:db-name "<DB-NAME>"}))
=> Unrecognized SSL message, plaintext connection?
do you know what I'm doing wrong? How do I configure "<DB-NAME>"?#2021-10-0712:19favilaThe client api connects to a peer server not a transactor. The “DBNAME” in your client config is set to whatever you want in the command line args that start the peer server.#2021-10-0712:21favilaYour complete system should have at least four processes running if you are using on-prem with the client api: dynamodb local, transactor, peer-server, and the client application#2021-10-0712:31Benjaminah I see I don't have a peer server yet#2021-10-0712:42favilaif you’re using on-prem, you could cut this down to two: transactor with “dev” storage (transactor acts as storage too), and an application using the peer (vs client) api#2021-10-0712:52BenjaminAh do you mean something like this?:
I already played around with that (it worked) now I'd like to make it work on aws#2021-10-0712:53Benjaminnow I have this error when I try to connect a peer to my transactor#2021-10-0809:05BenjaminI went on to skip the local setup and made it work on aws now#2021-10-0813:01Ivar RefsdalI have a query like this:
(d/query
{:query {:find '[?e]
:in '[$ ?v [?i ...]]
:where '[[?e :e/v ?v]
[?e :e/i ?i]]}
:args [(d/db conn)
true
(vec (take 500 (:i params)))]})
And it produces an OutOfMemoryError.
But if I change (take 500 ... to (take 10 ..., it works.
But that makes it a less restrictive query, right? How does that make sense?
Why does this happen?
I know (now) that swapping the order of the where clauses solves this problem.
Could it be possible that something is wrong in the Datomic query planner? Or is there something I'm not getting yet?#2021-10-0813:25Lennart BuitSo the first clause binds all ?e for which :e/v is true.
As far as I understand it, the where clauses are executed for each unique combination of in’s, and the results are union’ed.
Therefore, with [?i …] being 500 items long, you execute that large binding 500 times, whereas with [?i …] being 10 items, you only do it 10 times.#2021-10-0813:29Lennart BuitThere is no query planner in Datomic btw, you need to make sure yourself that the most restrictive clause is first.
#2021-10-0813:29Lennart BuitIn this case you have (empirically) found out that that is [?e :e/i ?i] 😛
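The "most restrictive clause first" rule can be sketched with hypothetical person attributes (the attribute names and values are assumptions for illustration):

```clojure
;; Less selective clause first: binds roughly half the population
;; before intersecting with the name.
'[:find ?e
  :where
  [?e :person/sex :sex/male]
  [?e :person/name "Lennart"]]

;; More selective clause first: binds only the few Lennarts, then
;; filters them - same results, much smaller intermediate set.
'[:find ?e
  :where
  [?e :person/name "Lennart"]
  [?e :person/sex :sex/male]]
```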
#2021-10-0815:42Ivar RefsdalHehe thank you Lennart 🙂#2021-10-0816:48Ivar RefsdalHow is "the most restrictive clause" calculated? The total number of datoms matched by that clause?#2021-10-0816:53Ivar RefsdalOr should the collection matches be considered special because of union/or?#2021-10-0816:59Ivar RefsdalI guess I am probably not getting something... Here is a gist that demonstrates the OOM though:
https://gist.github.com/ivarref/0d3d34eeeffbc4625d6120727368e405#2021-10-0816:59Ivar RefsdalI am at least surprised to get an OOM in this case#2021-10-0817:22jarrodctaylorAn example demonstrating optimizing for more selective where clause ordering https://github.com/cognitect-labs/day-of-datomic-cloud/blob/master/tutorial/query_perf.clj#2021-10-0818:56Lennart BuitDatomic isn’t necessarily determining whether a clause is most selective beforehand. It just finds all Datoms satisfying the clause.
So as a programmer you need to choose what clause is most selective. Say you look for all ‘males named Lennart’, the more selective clause is ‘named Lennart’. You know, around half of the population is male, and way way fewer people are named Lennart ^^.#2021-10-0818:59Lennart BuitSo the query would be most efficient by first finding all Lennarts and then filtering them on being male. #2021-10-0819:00Ivar RefsdalI get that ... But what makes a clause the most selective? It matches the fewest datoms?#2021-10-0819:02Ivar RefsdalFrom my gist example when the most selective (matching 1k datoms) is first, it OOMs.#2021-10-0819:03Lennart BuitMatches fewest datoms yeah#2021-10-0819:04Ivar RefsdalOK --- well, I still don't understand then why Datomic would OOM when the most selective clause is indeed first?#2021-10-0819:04Ivar RefsdalOr will this list binding create a cross product somehow?#2021-10-0819:05Lennart BuitYes, I like to think that it iterates through all tuples of in values#2021-10-0819:06Ivar RefsdalOK, but why would a loop of 50k iterations OOM?#2021-10-0819:06Ivar RefsdalI just don't quite see the "reason" for the OOM#2021-10-0819:07Ivar RefsdalI probably need to play more with datalog ...#2021-10-0819:09Ivar Refsdalhttps://docs.datomic.com/on-prem/best-practices.html#collections-as-inputs
I'd like a warning here if indeed you should treat collection inputs as potential cross product OOM-producers#2021-10-0819:11Lennart BuitThis is outside of my knowledge. I don’t know why datomic would struggle binding a thousand entities 10000 times #2021-10-0819:13Ivar RefsdalOK, well thank you for all your input either way 🙂
Not sure what timezone you are in, but here in Norway it's getting late, so I'm off for the weekend.
Have a nice weekend 😎#2021-10-0819:13Lennart BuitEspecially because it happens to be the same set for every iteration#2021-10-0819:13Ivar RefsdalYeah... I get the feeling this is a bug, no?#2021-10-0819:24Lennart BuitNah it is often datomic outsmarting us in its simplicity
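Lennart's selectivity point can be sketched in plain Clojure (toy data, not Datomic; the population size and name distribution are invented for illustration): both clause orders reach the same final answer, but starting with the more selective predicate keeps the intermediate result tiny.

```clojure
;; Toy population: 10,000 people, half :male, only 10 named "Lennart".
(def population
  (for [i (range 10000)]
    {:name (if (zero? (mod i 1000)) "Lennart" (str "person-" i))
     :sex  (if (even? i) :male :female)}))

;; Order 1: 'male' first -> 5000 intermediate rows to scan again.
(count (filter #(= :male (:sex %)) population))       ;; => 5000

;; Order 2: 'named Lennart' first -> only 10 intermediate rows.
(count (filter #(= "Lennart" (:name %)) population))  ;; => 10
```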
#2021-10-0819:25Lennart BuitAs in, in a good way#2021-10-0906:39pyry@UGJE0MM0W Just to add to what others have already said, the clause ordering will obviously have an impact on the query performance and size of intermediate results collected by the query. But if the total number of items ultimately returned by your query is too large to fit into the available memory, no reordering of clauses will work.#2021-10-0906:41pyryI'm wondering if you could test your original query using qseq instead of query or q? As qseq is lazy, I think you might be able to make some progress this way.. at least, if the original problem was that the query result was just too large to fit into memory AND you can process the results bit by bit.#2021-10-1110:17Ivar RefsdalHow is OOMing simplicity?
qseq is also OOMing. I've updated the gist with a qseq example.#2021-10-1110:24Ivar Refsdal@UCYS6T599 Please see the gist for a reproducible case:
https://gist.github.com/ivarref/0d3d34eeeffbc4625d6120727368e405
The result set is 1000 entities long.
A manual unrolling of the where worked (it's super slow, but at least it does not OOM):
(let [ids (vec (take 50000 uuids))
      db  (d/db conn)]
  (->> (for [e (d/query {:query {:find  '[[?e ...]]
                                 :in    '[$ ?b]
                                 :where '[[?e :e/b ?b]]}
                         :args  [db true]})]
         (set (d/query {:query {:find  '[?e ?u]
                                :in    '[$ ?e [?u ...]]
                                :where '[[?e :e/u ?u]]}
                        :args  [db e ids]})))
       (reduce set/union #{})))#2021-10-0822:47sebastianHi. Absolute Datomic beginner here.
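A back-of-the-envelope model of the OOM (this is my assumption, following Lennart's "iterates through all tuples of in values" remark, not a statement about Datomic internals): if the engine materializes the cross product of a 50k collection input with a ~1k intermediate result, that is 50 million binding tuples held at once before any of them can be discarded.

```clojure
;; Hypothetical sizes taken from the gist: 50,000 uuids bound as a
;; collection input, joined against ~1,000 matching entities.
(let [n-uuids    50000
      n-entities 1000]
  (* n-uuids n-entities))
;; => 50000000 candidate tuples, enough to exhaust a modest heap
```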
I am running Postgres in a Docker container and am trying to connect the transactor.
For this I am using the sql-transactor.properties sample file as a base.
How do I configure this to connect the transactor from the host to postgres inside Docker? The transactor throws an error that the hostname (the service name from the docker-compose file) is unknown.#2021-10-0908:33thumbnailHow are you running the transactor, and what is your platform (Mac OS / Linux / … ?)#2021-10-0909:42sebastianI'm on Linux.
I am running postgres via docker-compose, the service is called pg, ports are per usual and I am running the Clojure application as well as the transactor on my host machine. So the transactor is not running inside Docker#2021-10-0909:44sebastianThe transactor config file is derived from the sql sample only adding the license-key and changing the following:
protocol=sql
host=pg
port=5432
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
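One detail that may matter here (a sketch, not a verified fix): in the transactor properties, host/alt-host describe the transactor's own address, which is written to storage for peers to find it, while Postgres is reached only through sql-url. With the transactor on the host and Postgres in Docker, the compose service name pg only resolves inside the Docker network, so the properties might look like this, assuming compose publishes Postgres on the host's port 5432 (4334 is the default transactor port):

```properties
protocol=sql
host=localhost
port=4334
sql-url=jdbc:postgresql://localhost:5432/datomic
sql-user=datomic
sql-password=datomic
```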
#2021-10-0909:46sebastianI also created the DB, the datomic_kvs table and a datomic user per the instructions on their website using the SQL statements I've found in the Datomic folder.#2021-10-0909:47sebastianPostgres is running, I can connect to it via an Adminer I am also running with the same docker-compose#2021-10-0909:49sebastianupon starting the transactor, it says system started but then I am getting a stacktrace with the "interesting part" being
Caused by: java.net.UnknownHostException: pg#2021-10-1112:13Hannes SimonssonHi, I'm just starting out learning about datomic and have run into an issue with querying entity ids.
When I run the following query it returns all my activation ids with all corresponding entity ids.
(db/q '[:find ?a ?b
:where [?a :activation/id ?b]]
db)
However, if I instead run:
(db/q '[:find ?a
:where [?a :activation/id "some-id-I-know-is-stored-in-the-db"]]
db)
Then it returns an empty list instead of the entity id I expect it to return, there should be only one since activation id is a uuid.
Anyone know what the problem is, have I misunderstood something?#2021-10-1112:25favilaUuid literal syntax in Clojure is #uuid “the uuid”#2021-10-1112:26favilaIs that what you are doing? A string and an actual uuid object are not going to compare equal#2021-10-1112:29Hannes SimonssonThat was it, thank you so much. Been banging my head on this for some time.#2021-10-1113:30kirill.salykinhi
Why is there no query planner in Datomic, and is there a plan to add one eventually?#2021-10-1113:44Linus EricssonThere is no query planner in Datomic. I think this is because building a good query planner is hard, and also it is not always perfect, which could lead to surprising (very slow) query performance. That said, it is quite easy to build a query planner on your own, but probably best suited for data that you know much about.#2021-10-1115:15lispers-anonymousHow might you go about building a query planner on your own? First you say it's hard, then you say it's easy. Seems like it would be very hard to do on your own without concrete insights into the performance of your queries. In something like postgres you can use analyze and explain directives to help with that (granted, those are for explaining what their query planner is doing). Datomic does not provide anything.#2021-10-1116:14kirill.salykinGood point, not much query stats available#2021-10-1208:33raspasovI think the point was that it’s relatively “easy” to build a query planner for data that you know. A general query planner that works well for unknown data is always “hard”.
Are you seeing poor performance or anything that makes you want one?#2021-10-1118:43lispers-anonymousOne use case for a query planner would be a single query that can be run against many databases. It's one thing to look at the query and re-order clauses to perform well for one particular database. But when the query runs against multiple databases, the shape and the dimension of data could mean that optimizing the query order for one database causes the query to perform poorly against another. It's very hard to optimize something like this at development time. When you are running a query with a lot of rules that call each other (sometimes recursively) that makes it even more difficult to think about how clauses should be ordered, or to even determine what actual order the clauses are run in the first place. It just feels like something that should be done programmatically at run time. The right information is not always available when writing the queries.#2021-10-1118:54kirill.salykinAlso, a missing query planner means that it is the responsibility of the developer #2021-10-1119:15dvingosome reasons I can think of:
• the optimization of clause ordering can change over time as the # of entities that match a clause can change as the application lives for more time (something that was performant at one point in time may not be in the future)
• developing on a team with a library of rules, where it is not feasible to inspect the clause ordering of all the rules (especially when rules invoke other rules, or invoke subqueries)
• developer ergonomics - the same reasons you want to program in a higher level programming language - these are details a compiler should deal with#2021-10-1119:23dvingoAnd to the original prompt - yes this was spurred by performance problems. There were more than a handful of code changes on a large project related to clause ordering causing poor performance. So stated again, the benefit of a query planner is cost savings - all the debugging and dev time needed to address this (and again addressing it once doesn't mean the problem is "fixed").
#2021-10-1119:59Ivar RefsdalWe've also been hit by poor query performance, and I've reproduced an OOM (OutOfMemoryError) with a large list binding here: https://gist.github.com/ivarref/0d3d34eeeffbc4625d6120727368e405
In production this happens (OOM) once two input parameters become a total of ~1000 items.#2021-10-1120:25dvingoAlso, my original question (from June) was not requesting a query planner, I was asking if one was ever considered in datomic, and if so why there isn't one (what are the tradeoffs etc) because I don't see that documented anywhere.#2021-10-1120:41Drew VerleeHow would a query planner work in this case? Inspect your access patterns and DB and try to guess at future cases and re-index and re-order clauses? My naive guess as to why it's not there is that they would be worried about trying to be too smart and creating unpredictable behavior that would backfire.
I would be interested to know if there is anything that prevents us from writing a query planner.#2021-10-1216:27dvingothere's an open source one available here if you want to see one way to go about it:
https://github.com/xtdb/xtdb/blob/master/core/src/xtdb/query.clj#2021-10-1216:43dvingojust discovered this too
https://github.com/xtdb/xtdb/blob/d48ef6b15fdc8833ab5c648f2b1d383cbed5b599/docs/design.adoc#query-engine#2021-10-1214:56stuarthallowayHi @danvingo! We have certainly considered adding a query planner to Datomic, and might do so in the future. A good query planner has an obvious benefit: It makes queries run faster without needing any help from the user. OTOH, as @oscarlinusericsson points out, query planners can sometimes cause surprising and slow results. If there is no user control, this can be catastrophic. If the query planner includes user knobs, the "obvious benefit" becomes less obvious.#2021-10-1216:24dvingoThanks @U072WS7PE ! I appreciate you taking the time to reply 🙂#2021-10-1216:26dvingoPerhaps instead of a "knob" there could be just a "button" (on and off) that can be passed as an option to q#2021-10-1216:41stuarthallowayAgreed. Related: we are certainly interested in specific performance problems as folks encounter them.#2021-10-1318:17Ivar RefsdalRe performance: Here is a gist reproducing an OOM with a simple query:
https://gist.github.com/ivarref/0d3d34eeeffbc4625d6120727368e405
I am already in contact with Jaret B on this one (via support)#2021-10-1215:26Tobias SjögrenDoes anyone know when the concept of immutability within databases arose and who is the originator of this idea?#2021-10-1215:31Alex Miller (Clojure team)isn't immutability the obvious choice? who came up with the crazy idea of mutating data in place?
#2021-10-1215:32Tobias SjögrenAgree. Not me.#2021-10-1215:37DenisMcHi,
I’m struggling to get my head around a basic use-case for Datomic (years of relational thinking are taking some time to unwind!). To boil the problem down to its simplest, consider a datomic schema as follows:
{:db/ident :foo/id
:db/valueType :db.type/uuid
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one}
{:db/ident :foo/foo-description
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :foo/bars
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true}
{:db/ident :bar/id
:db/valueType :db.type/uuid
:db/unique :db.unique/identity
:db/cardinality :db.cardinality/one
}
{:db/ident :bar/bar-description
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
}
So I have an entity foo that can contain bar entities (in the :foo/bars cardinality-many attribute). Now, I need to look up the parent foo entity using an attribute from the child bar entity (in this case, bar-description). The way I am trying to achieve this right now is with the following code:
(def foo-id (UUID/randomUUID))
(def bar-id (UUID/randomUUID))
(def foo-data [{:foo/id foo-id :foo/foo-description "foo description 1"}])
(def bar-data [{:bar/id bar-id :bar/bar-description "bar description 1"}])
(d/transact conn {:tx-data schema})
(d/transact conn {:tx-data foo-data})
(d/transact conn {:tx-data [{:foo/id foo-id :foo/bars bar-data}]})
(d/q '[:find ?foo-desc
:in $ ?bar-desc
:where [?bar :bar/bar-description ?bar-desc]
[?foo :foo/bars ?bar]
[?foo :foo/foo-description ?foo-desc]]
db ["bar description 1"])
Given the transacted data, I would have expected the final query to return “foo-description-1", but instead I am getting an empty list. I’m clearly missing something here but I’ve spent the entire day at this point trying to figure it out, so maybe someone here could point out where I’m going wrong. Thanks in advance!#2021-10-1215:54jarrodctaylorIn your query I believe you want to pass "bar description 1" as the argument not ["bar description 1"]
For your parent lookup functionality you probably will want to use a https://docs.datomic.com/cloud/query/query-pull.html#reverse-lookup if you haven’t experimented with it already.
Something along the lines of:
(d/q '[:find (pull ?e [{:foo/_bars [*]}])
:in $ ?bar-desc
:where [?e :bar/bar-description ?bar-desc]]
(d/db conn) "bar description 1")#2021-10-1215:56Philremove the square brackets from `["bar description 1"]` in the query (and use (d/db conn) to see the "new" db#2021-10-1215:56favilaYour db in the query arg, does it include the results of the transaction? Where was it def-ed? Remember databases are immutable. Consider using the db-after from the d/transact call#2021-10-1316:25DenisMcThanks for the feedback, that was the problem alright. I had a separate issue with the underlying code from where I derived this simple example which I have also fixed now. Seems to be working well so far.#2021-10-1215:58Philis there any way to reverse the effect of (datomic.dev-local/divert-system … ?#2021-10-1414:37jaretNo, there is no way to un-divert in process.#2021-10-1216:34prncHi,
I’ve updated to the newest version of datomic cloud (ions),
{,,,
"group-cft-version":"936",
"group-cloud-version":"9118",
,,,
"status":"running"}
In release notes: https://docs.datomic.com/cloud/changes.html#936-9118, I’m seeing “Upgraded Clojure to 1.10.3”, yet on ion-dev push, I’m seeing overrides (“The :push operation overrode these dependencies to match versions already running in Datomic Cloud. To test locally, add these explicit deps to your deps.edn.“) to org.clojure/clojure #:mvn{:version "1.10.1"}, I’m not sure how to interpret this? There is this discrepancy on other dependencies as well, it’s quite confusing 😕#2021-10-1218:08prncIs there a way to check what’s actually on the classpath of an ions system? From the above I gather e.g. that it’s running clojure 1.10.1 and is supposed ot run 1.10.3 according to the changelog, am I misinterpreting?#2021-10-1219:45Daniel JompheYou probably didn't upgrade ion-dev tool used to :push and :deploy.
AFAIR it's that tool that checks the deps to warn about overrides.#2021-10-1219:47Daniel JompheIn other words, that tool uses a static list of deps and versions, it's not a truly dynamic check.#2021-10-1219:56prnchmm, thanks for the idea, but I’m on `com.datomic/ion-dev {:mvn/version "1.0.294"}` which is the latest#2021-10-1317:31Daniel JompheIf you check the first ~50 messages logged in the CloudWatch logstream for your app after a deployment, a few of them contain lots of details and also show the classpath. Those are the ones about how the system is booted up. (The logstream search field is sometimes useful but can also trip you up.)#2021-10-1317:45prncAlright! that is a very good tip indeed, will read through that carefully. Thanks Daniel, really appreciate your help!#2021-10-1317:46Daniel JompheAnd I felt sorry to not provide more guidance! Thanks for the appreciation. I'm in a rush... 🙂#2021-10-1318:46prncThis is plenty helpful. Overall datomic is very logical & docs are decent, but sometimes I come across some more opaque corner, especially around ops. The community is quite small and it’s not open source, so it’s not always easy to find answers; that’s why I’m particularly grateful to anyone who goes out of their way to help 🙂 So, Cheers 🍹
#2021-10-1309:56heliosWe're trying out Datomic Analytics (Presto) and are puzzled by something:
• The DB has a few tens of millions of datoms
• We're running on a machine with 64GB RAM and 16 cores
• Transactor is configured with 6GB RAM
• Peer server is configured with 30GB RAM
• Presto is running with 20GB RAM
We have 4 "tables" in our metaschema. We have attributes of type refs that in SQL-world can be used to perform joins.
• When we run a simple lookup using a datomic query (attribute value) it's of course instantaneous (both with the datomic api and the client api against the peer server).
• The same query on presto (again, just lookup by column value) takes 5 seconds.
Performing a more complex query that "joins" in datomic on this attribute is also instantaneous, and the SQL equivalent takes around 4 minutes. With logging it looks like it's scanning the whole table rather than relying on indices.
What are we doing wrong? Is there anything else we can do to rely on indices? The fact that it's SO much slower than datomic feels a bit unexpected (slower yes, but by this much it makes it unusable)
#2021-10-1312:14stuarthallowaySome general points:
• Presto is ill-suited for low latency lookups of a single object, so that will never be competitive.
• There is likely an inflection point where Presto will win, at a much bigger database size than what you are describing.
• We are aware of substantial opportunities to improve performance on queries, so this will get better in a future release.
All that said, we would be happy to learn more about your use case and see if there are ways to make it faster today.#2021-10-1405:48heliosThank you @U072WS7PE 🙂 What do you need to learn more about our use case?#2021-10-1408:23heliosWe're building a POC for a customer with Datomic and their existing BI tools rely on SQL and they also use it for manual queries as well. So we're investigating using Datomic Analytics so the customer can evaluate it#2021-10-1413:50jaret@U0AD3JSHL You can share these details with me via support ticket (<mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>), but we would be interested in where specifically you encounter performance issues and what your specific POC business requirements are. Using presto/trino clusters and tuning we have found we can tune performance well (adding more nodes for parallel work). As Stu said we are aware there are substantial opportunities to improve performance of queries. In terms of getting faster today... we have had great success with is using https://trino.io/docs/current/connector/memory.html. That allows you to create a virtualized result set, an in-memory snapshot of whenever you issued the query that is held in memory on the machine. You can virtualize select * queries or queries for specific columns. This process can be run in a loop to be nearly current (i.e. stale by whatever the execution time is). Then you can point your queries at this result set for best performance.#2021-10-1314:23octahedrionCan I use dev-local with analytics ?#2021-10-1411:28Ivar RefsdalI am occasionally getting some really long :kv-cluster/get-val read times, such as
DEBUG {:event :kv-cluster/get-val, :val-key "60a5e1e2-b81a-49a1-9161-b19df11c3934", :msec 965000.0, :phase :end, :pid 26, :tid 190}
Why would this happen? What could explain this?
No services other than the datomic backend have problems reading from postgres.
This is on-prem and the storage is postgres.#2021-10-1415:37Daniel JompheThis query fails to run. It fails to marshal because it contains a regex.
Execution error at com.cognitect.transit.impl.AbstractEmitter/marshal (AbstractEmitter.java:194).
Not supported: class java.util.regex.Pattern
Can we add a transit handler for Datomic to successfully marshal and unmarshal the regex?
Or is there a simpler solution?
#2021-10-1415:37Daniel JompheIf I'm right:
• This works fine in dev-local and when the app is deployed as an Ion inside the Datomic Cloud cluster. No marshalling is required in such environments.
• But it doesn't work when the app runs outside the cluster and connects remotely to a DB. Marshalling is required in such environments.#2021-10-1416:11Lennart BuitCan you pass the regex as string, then use re-pattern to compile?#2021-10-1416:13Lennart BuitAlthough the regex api is a bit imperative, wonder how that goes…#2021-10-1416:15Lennart Buit(Note that you can’t do nested exprs, you need to bind single exprs one by one)#2021-10-1417:49Daniel Jomphe@UDF11HLKC that's something I tried, but no, we can't. Even a hardcoded search term like this one makes it throw the same error.
'[:find (pull ?cpl cpl-repres)
:in $ cpl-repres ?tenant-id ?search-str
:where [?tenant :tenant/name ?name]
[(.toLowerCase ^String ?name) ?lower-name]
[(re-find #"(?i)si" ?lower-name)]
...]
#2021-10-1417:54Daniel JompheOr, for sure, we could install a transaction function just for the purpose of instantiating a regex pattern "server-side" out of args passed as strings, but again, is there a simpler solution or a config knob for transit handlers? Edit: oh no, a tx-fn isn't a solution since this is a query...
#2021-10-1420:57Lennart BuitI meant constructing the pattern inside the clauses of the query:
(d/q '{:find [?match]
:in [$ ?pattern-string ?input]
:where [[(re-pattern ?pattern-string) ?pattern]
[(re-find ?pattern ?input) ?match]]}
(d/db conn)
"\\d+"
"abc12345def")
=> #{["12345"]}#2021-10-1420:58Lennart Buitalthough this is all in-process, so YMMV#2021-10-1814:37Daniel Jomphe@UDF11HLKC, this does indeed seem to work!
Thanks a lot for taking the time to come back!#2021-10-1814:39Lennart BuitYeah, so pattern instances can’t be serialized, and we circumvent that by passing a string and compiling the pattern on the peer. You have access to the majority of clojure.core on the peer, after all#2021-10-1814:50Daniel JompheYes. It now makes sense to me. My Datomic-query-fu is quite dusted/rusted.
#2021-10-1416:33popeyeI am new to datomic and I am doing self-study, going through https://docs.datomic.com/cloud/dev-local.html and I installed it and configured local storage#2021-10-1416:34popeyeadded the downloaded data sample and am running a simple application as below#2021-10-1416:34popeye(ns datomic-practice.core
(:require [datomic.client.api :as d]))
(def client (d/client {:server-type :dev-local
:system "datomic-samples"}))
(println (d/list-databases client {}))#2021-10-1416:44popeyeUnzip the datomic-samples zip into your - I have completed this step and ran ./install#2021-10-1416:46FredrikDo you have the .datomic/dev-local.edn file?#2021-10-1416:52popeyeyes#2021-10-1416:53popeyeit is in /home/g/.datomic folder#2021-10-1417:09FredrikAnd its content is
{:storage-dir "/home/g/SOME-FOLDER"}
and you have unzipped the samples into
"/home/g/SOME-FOLDER/datomic-samples"
#2021-10-1417:09FredrikIs this the exact same structure you have?#2021-10-1418:01popeyeYes sir#2021-10-1416:34popeyebut result is empty, anything wrong I am doing?
#2021-10-1508:26Kris CHi, new to Datomic, trying to set it up with postgres storage. Transactor starts OK, but I have a problem starting the peer server, the error I am getting is "Could not find datomic in catalog". Any hints?#2021-10-1508:27thumbnailI'm not sure if this is the problem; But the database should be created before booting the peer-server.#2021-10-1508:27Kris Cit is created#2021-10-1508:27Kris Ctransactor starts ok#2021-10-1508:54Kris Cah, found the problem: you first need to create the (datomic) database by hand as explained here:
https://docs.datomic.com/on-prem/peer/peer-getting-started.html#2021-10-1508:54Kris CPeer Server does not create or delete databases and must be connected to an already-existing logical database within the Datomic system.
#2021-10-1615:47sebastianHey. How are you running your Postgres? Locally or inside a Docker container?
I was trying to create a connection from my Clojure app on the host to Postgres in Docker which fails.
So I am hoping for a basic config that works -.-#2021-10-1808:41Ivar Refsdalsebastian: Have you tried setting ALT-HOST in the transactor properties file?
https://docs.datomic.com/on-prem/operation/deployment.html#peers-fail-connect-txor#2021-10-1810:12Kris C@U4LN72X44 I am running it locally, had no problems but the one I described..#2021-10-1818:11sebastianthanks for the pointer. I'll try that.#2021-10-1509:18kirill.salykinhi
is there a way to attach a tx function to every tx when some attribute is being updated (w/o specifying it explicitly as part of the tx)?
for example to enforce the constraint
order/state can be one of #{:pending :rejected ...}
similar to what sql does on constraint check#2021-10-1511:10Ivar RefsdalHow about:
https://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates ?#2021-10-1511:11Ivar Refsdal(I haven't used it myself though)#2021-10-1513:05kirill.salykinthat's it! thank you so much!
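For readers landing here, a sketch of what an attribute predicate might look like for kirill's :order/state example (the states beyond :pending/:rejected and the my.app.preds namespace are my invention; in a real system the predicate must be a fully-qualified, classpath-visible function):

```clojure
;; The predicate is an ordinary function of the proposed value; transactions
;; asserting a value for which it returns false will fail.
(defn valid-order-state?
  [state]
  (contains? #{:pending :rejected :accepted :shipped} state))

;; The attribute's schema then points at the predicate via :db.attr/preds:
(def order-state-schema
  {:db/ident       :order/state
   :db/valueType   :db.type/keyword
   :db/cardinality :db.cardinality/one
   :db.attr/preds  'my.app.preds/valid-order-state?})
```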
#2021-10-1907:41Kris CHow can I start Datomic (in-memory) within a JVM application (Java/Kotlin) for integration testing? Are there any docs regarding this?#2021-10-1912:32Linus Ericssonin On-Prem: Use a database uri of the form "datomic:mem://some-mem-db-1" and make d/create-database etc just as if it was an external db with a transactor.#2021-10-1914:28souenzzo;; on-prem
(defn memory-conn
  [db-name]
  (-> (str "datomic:mem://" db-name)
      (doto d/delete-database
            d/create-database)
      d/connect))

;; cloud
(defn memory-conn
  [db-name]
  (let [client (d/client {:server-type :dev-local
                          :system      (str db-name)})]
    (-> client
        (doto (d/delete-database {:db-name (str db-name)})
              (d/create-database {:db-name (str db-name)}))
        (d/connect {:db-name (str db-name)}))))

(let [;; Do not be afraid to generate many names
      conn               (memory-conn (UUID/randomUUID))
      {:keys [db-after]} (d/transact conn tx-schema)]
  (do-thing {::conn conn} ...))
Also take a look at https://github.com/vvvvalvalval/datomock#2021-10-2007:47Kris CThanks! 👍#2021-10-1919:36Daniel JompheUpgrading Datomic Cloud storage will delete too many resources, why?#2021-10-1919:37Daniel JompheFrom 884-9095 to 936-9118, switching Reuse Existing Storage from false to true since this is this stack's first upgrade ever.
This yields a very dubious change set, wherein important EFS, DDB, and other resources are to be removed.#2021-10-1919:39Daniel JompheContinuing...#2021-10-1919:41Daniel JompheIf I flip the argument to Reuse Existing Storage = false, then it yields a more reasonable changeset. It's as if it yields the reverse of what it should yield.#2021-10-1919:43Daniel JompheI manage several Datomic Cloud systems in different AWS accounts.
An hour ago I upgraded my first environment. It yielded a 2-changeset like the last screenshot above. The upgrade went fine.#2021-10-1919:43Daniel JompheWhy is it that this other AWS account's CloudFormation seems to read the reverse of my Restart/Reuse Existing Storage parameter?#2021-10-1919:45Daniel JompheIt's now been one year that I track each Datomic Cloud release in a few AWS accounts.
It's the first time I see the upgrade process propose to do the reverse of what we want in an upgrade.#2021-10-1919:48Daniel JompheThis is one of our newest AWS accounts, and this CloudFormation stack's first upgrade. Before I came to update it, its Reuse Existing Storage was still at its pristine false value, and I obviously switched it true but the ChangeSet yielded showed me CF is about to delete my storage if I proceed.
I suspect the new CF YAML reacts badly to our changing the Reuse Existing Storage parameter on a never-before-upgraded stack!?#2021-10-1919:49Daniel JompheThe other account I successfully upgraded an hour ago already had its Reuse Existing Storage param set to true during last summer's update. And it went fine without proposing to delete my storage.#2021-10-1919:50Daniel JompheThanks for any help. Should I proceed asking for the reverse of what I want so that I get what I truly want? A quick read of the CF template tells me this might be a bad idea, even though the change set then looks reasonable.#2021-10-1921:12Daniel JompheTo avoid duplicating efforts with Cognitect, please know that I submitted a support request (#3334).#2021-10-2514:40Daniel JompheFor posterity's sake, we found out that since this is a new stack that was never upgraded before, the change set built is bigger (compared to the other stack that had already gone through an update). That's normal.
So I applied the update as instructed and all went fine...#2021-10-1921:02TyThis is not a datomic-specific question but rather a bit more general. I've been eyeing some of the RDF implementations from afar for a while, and I've finally wrapped my head around them over the last month or so. I've hit a point recently where I've started to think that RDF (and when I say RDF I specifically mean something like Datomic/XTDB flavors of RDF) can be used to model any type of data. So my questions to folks who are more familiar with it and have battle-tested it in the real world:
1. Where, if anywhere, does this approach fall down from your perspective?
2. What are the most important lessons that you wish you had learned earlier when taking this type of approach to domain modeling?
3. Do you think this type of approach can model any data domain? If so, is it actually the best fit for everything? Or are there situations in which it doesn't make sense to use it?#2021-10-1921:12favilaxtdb is actually a document store…#2021-10-1921:13Alex Miller (Clojure team)I am not a Datomic expert, but have done several projects with it, and also worked for several years building RDF-based products. I find Datomic EAV to be a fantastic fit for things that are "mostly tabley" (entities that mostly have the same set of attributes but maybe sparse), but a little graph-y (particularly hierarchies). In general, I think those capabilities are extremely flexible for modeling a wide range of human information systems. For working with very structured data (columnar, aggregate star schema rollups, time series, etc) designed for specific access patterns and very high performance, those custom fit structures are probably going to be better than the relational approach. And on the other end, for things that are very graphy (non-hierarchical networks), graph-first dbs probably have support for things like nearest-neighbor queries that would not be as good in Datomic.#2021-10-1923:00Michael Stokleythis query
'[:find ?e
:in $ [?vs ...]
:where
(not (not [?e :some-card-many-attr-of-type-ref ?vs]))]
would mean something like "get me any entity ?e that is not related to any v
outside of the group ?vs", is that right? "exclude any ?e related to a v outside of ?vs"#2021-10-2004:35Tobias SjögrenSay I want to define the schema attribute “first_name” in a database - what would the needed datoms look like (in storage)?#2021-10-2004:59tatutsomething along the lines of https://github.com/solita/mnt-teet/blob/master/app/backend/resources/schema.edn#L55#2021-10-2005:35Tobias SjögrenThese are not actual Datoms in storage, right? That is what I’m looking for - to see the actual triples..#2021-10-2006:19tatutthe map format is a convenient way to express multiple facts about a single entity, so it's not exactly the datoms in storage#2021-10-2006:20Tobias SjögrenWhat I tried to ask is how the datoms would look like in storage..#2021-10-2006:26tatutfor that particular case, it looks like the below... I have no idea how they are actually represented in storage
(d/q '[:find ?e ?a ?v :in $ ?e :where [?e ?a ?v]] (db) :user/given-name)
=> [[:user/given-name 40 23]
[:user/given-name 10 :user/given-name]
[:user/given-name 41 35]
[:user/given-name 63 "User's given name"]]
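A hedged sketch of how datoms like those relate to the map-form schema definition (assuming the peer API required as `d`; the entity id 72 and the tx id are invented for illustration):

```clojure
;; Sketch only: a schema attribute is itself an entity described by datoms.
;; The map form is shorthand for several assertions sharing one E.
(require '[datomic.api :as d])

(def schema-tx
  [{:db/ident       :user/given-name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/doc         "User's given name"}])

;; (d/transact conn schema-tx) would assert datoms shaped like
;; (entity/tx ids invented):
;; [72 10 :user/given-name    tx true]  ; 10 = :db/ident
;; [72 40 23                  tx true]  ; 40 = :db/valueType, 23 = :db.type/string
;; [72 41 35                  tx true]  ; 41 = :db/cardinality, 35 = :db.cardinality/one
;; [72 63 "User's given name" tx true]  ; 63 = :db/doc
```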
#2021-10-2006:38Tobias SjögrenI wonder how accurate this is..#2021-10-2006:58tatutafaict, entities are always identified by the 64bit number, but the number :db/id and keyword :db/ident are often used interchangeably when showing info#2021-10-2007:22Tobias SjögrenThat fits with the example database here: https://docs.datomic.com/cloud/time/filters.html
I guess the actual stored entity id is the 64bit number and :db/id and :db/ident are just aliases..#2021-10-2012:44favilaYou can see the raw datoms using d/datoms#2021-10-2012:46favilaIt’s a tuple of [e a v tx op], where e, a, tx are entity ids (longs), op is a boolean (true=assert, false=retract) and V is an object whose type depends on the valuetype of A#2021-10-2012:48favilanote that :db/id is just syntax for the map form, it’s not an ident. There’s no attribute :db/id nor is there a datom corresponding to it. it’s just the E that everything in the map has in common
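A short sketch of reading those five slots off raw datoms (assumes the peer API required as `d`, a `db` value in scope, and an illustrative attribute name):

```clojure
;; Sketch: each raw datom exposes the slots [e a v tx added].
;; Assumes a peer `db` value; :user/given-name is illustrative.
(doseq [datom (take 3 (d/datoms db :aevt :user/given-name))]
  (println (:e datom) (:a datom) (:v datom) (:tx datom) (:added datom)))
;; There is no datom for :db/id itself -- in a tx-data map it merely
;; names the E shared by every assertion in that map.
```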
#2021-10-2014:45Tobias SjögrenI actually don’t use Datomic (yet) - I’m trying to recreate the basic functions in FileMaker to understand everything before probably moving on to using Datomic..
So “a” is also a long and referencing the attribute somehow? If I could see a “entity id entity”, an “attribute entity” and a “transaction entity” as raw data Datoms it would be great.. Your last message I didn’t quite understand..#2021-10-2015:01favilaattributes are entities#2021-10-2015:02favila“entity id entity” is nonsensical#2021-10-2015:03favilathe unit of truth in datomic is a datom, which I described. Everything you see that looks like a map and represents an “entity” is merely a projection into/out-of those datoms#2021-10-2015:05favilaentities don’t exist in the system as entities---they are merely “the datoms which share an E value”#2021-10-2015:08Tobias SjögrenRight.
What would an “attribute entity” look like as raw datom(s) ? The “:db/ident” is not really the raw data, right?#2021-10-2015:08favilawhat do you mean by an “attribute entity”?#2021-10-2015:09Tobias SjögrenA datom where you define an attribute.#2021-10-2015:11Tobias SjögrenYou said the “A” were entity ids (longs)..#2021-10-2015:11favilaAn entity becomes a legal attribute by being asserted on the db entity’s :db.install/attribute attr, e.g. [:db.part/db :db.install/attribute <ATTR> <some-tx> true]#2021-10-2015:12favila(In fact you used to have to do this explicitly in the transaction, it’s implicit now)#2021-10-2015:24Tobias SjögrenEvery “A” is an entity id ? So “:db/ident” is a representation of an underlying entity id? If I add the :person/name-first attribute e.g., this attribute will be represented by an entity id in a datom?#2021-10-2015:26Tobias Sjögren(layman warning...)#2021-10-2015:27favilaevery attribute is an entity. :db/ident is a way to “name” an entity so that a keyword where an entity id is expected will resolve to that entity. (this is called “entity lookup”, and there’s one more way to do it using a unique attribute and the syntax [:unique-attr unique-value])#2021-10-2015:27favila:db/ident itself is an entity#2021-10-2015:28Tobias Sjögrenrepresented by some datom and an entity id?#2021-10-2015:29favilaIf you really want to understand how datomic bootstraps itself, I recommend looking at the earliest transactions and seeing how they establish the foundation for the system. You can see them with this:
(d/create-database "datomic:")
(-> (d/connect "datomic:")
(d/db)
(d/seek-datoms :eavt 0)
(->> (group-by :tx)
(sort)
vals))
In particular, you’ll notice that entity 10 is the :db/ident attribute itself, and it names itself self-referentially with datom [10 10 :db/ident 13194139533312 true] . Entity 0 is the database, entity 13 is the :db.install/attribute attribute, and attribute installations have the pattern [0 13 <entity-id-of-attribute> TX true]#2021-10-2015:30Tobias Sjögrenaha!#2021-10-2015:31Tobias SjögrenIs there a place to read all those “foundation datoms” without having Datomic?#2021-10-2015:31favilano#2021-10-2015:33Tobias SjögrenBut with the right Datomic command, you can see all of it?#2021-10-2015:33favilaIt’s what I just pasted above#2021-10-2015:33favilaor query#2021-10-2015:33Tobias Sjögrenright#2021-10-2015:34favilathese are “normal” datoms, there’s no extra meta “DDL” layer here made of different stuff#2021-10-2015:34favilathe data are datoms and the schema are datoms#2021-10-2015:34Tobias Sjögrenbut the entity ids are hard coded for these foundation stuff?#2021-10-2015:35favilayes, but since they all have :db/idents, it doesn’t matter#2021-10-2015:35favilaeverything refers to them via ident#2021-10-2015:35favilathat’s what gives them meaning to code that reads or manipulates these entities#2021-10-2015:37favilaTo blow your mind a bit more, entity ids are composed of partition in the high bits and a T value in the low bits. The partitions are also entities, and there are three predefined ones (0=db, 3=tx 4=user). so entity ids have entity ids inside them too#2021-10-2015:38faviladb = 0, which is why these bootstrap entities are small numbers, because all the high bits are clear#2021-10-2015:38favilabut transaction ids are large numbers#2021-10-2015:39favilad/entid-at lets you construct entity ids#2021-10-2015:51Tobias SjögrenSo, during installation, datoms are added to storage that represent the foundation for each subsequent added datom? 
In this way, just about everything consists of datoms?#2021-10-2015:59favilathey come with a newly-created database#2021-10-2212:25Tobias Sjögren@U09R86PA4 When you say “V is an object whose type depends on the valuetype of A” I’m not really following what you mean by “V is an object” - in what sense is it an object?#2021-10-2212:26favilaThe Java sense#2021-10-2212:27favilaIt’s a Java reference type in the Datom type, not a primitive type#2021-10-2212:28favilaThe E A Tx op fields are all known fixed primitive types#2021-10-2212:37Tobias SjögrenI don’t quite get that..
If I put the string “Francis” as the V - it is not anything else than just a string?#2021-10-2212:39favilaThe data type Datom must allow any legal type in the system in the V slot#2021-10-2212:40favilaSo in the java implementation that field must be type Object#2021-10-2212:40Tobias Sjögrenok#2021-10-2212:41favilaThat’s all I mean. I’m not making any deep statements here about triple modeling#2021-11-1013:18Tobias Sjögren@U09R86PA4 When you wrote [0 13 <entity-id-of-attribute> TX true], the datom order is A, E, V, TX, OP - right ?#2021-11-1013:22favilaDatom slots are E A V Tx Op#2021-11-1013:36Tobias SjögrenThat was what I expected when I saw these “foundational datoms”:
([10 0 :db.part/db 13194139533312 true]
[11 0 0 13194139533366 true]
[11 0 3 13194139533312 true]
[11 0 4 13194139533312 true]
[12 0 20 13194139533366 true]
[12 0 21 13194139533366 true]
[12 0 22 13194139533366 true]
[12 0 23 13194139533366 true]
[12 0 24 13194139533366 true]
[12 0 25 13194139533366 true])
But from what I can understand this is A E V Tx OP…#2021-11-1013:44favilaIt looks like it might be. What code made this?#2021-11-1013:44Tobias SjögrenI actually don’t know..#2021-11-1013:46Tobias SjögrenHere’s the full batch..#2021-11-1013:47Tobias SjögrenIf that is not A E V Tx OP - I have to conclude that I don’t understand this at all…#2021-11-1013:57favilaThese aren’t even datoms, it’s one long vector#2021-11-1013:58favilaWhere did you get this, if you don’t have the code that made it?#2021-11-1013:59Tobias SjögrenFrom someone that, unlike me, has Datomic installed..#2021-11-1013:59Tobias SjögrenThe content of the vector are datoms in a specific slot order, right?#2021-11-1014:03favilaThe txt file you provided has only one vector in it. I guess you could presume based on line breaks and order and other clues that this is a cycling pattern of slots, but these are not datoms. As you noticed, it doesn’t seem to be E A V T Op#2021-11-1014:05Tobias SjögrenIf so, I’m still looking to get the “foundational datoms” ...#2021-11-1014:06favilaso, install datomic, run the code above#2021-11-1014:06favilaWhy reverse engineer datomic from slack conversations?#2021-11-1014:09Tobias SjögrenI know I should install Datomic. I need help to do it though. The amount of time it would take me to do it alone would make me lose focus on what I’m trying to achieve..#2021-11-1014:10favilawhat are you trying to achieve?#2021-11-1014:11Tobias SjögrenPossibly just as an intermediate step: Represent the core principles and ideas behind Datomic in my current platform (FileMaker).#2021-11-1014:13Tobias SjögrenDoing so will hopefully allow me to be sure if I should leave it (FileMaker) or not..
I want to see how far I can reach. So far it looks promising. It will be far from as performant as Datomic, but possibly still usable..#2021-11-1014:26favilaok, then maybe getting the “foundational datoms” of datomic is a distraction. Think about the minimum you would need to self-describe the system before you could implement the code that emits datoms from transaction data. You only need to know enough to perform ident lookups, and it needs some known idents that describe transaction and schema-related operations#2021-11-1014:31Tobias SjögrenI would agree to that they are partly just a distraction, but it makes me think about that self-describe thing.. I’m currently trying to construct an interface for constructing the schema attributes..#2021-10-2008:00Tobias SjögrenBy looking at storage data, how do I know when an entity id (in the E position of the datom) represents a transaction id?
Is it when the datom attribute is “:db/txInstant” ?#2021-10-2009:07Linus Ericssonexample - transaction:
[[1 :cat/name "bob" 1001 true]
[1001 :db/txInstant <date> 1001 true]
[1001 :transaction/source "internet" 1001 true]]
As you can see, there can be more datoms on the transaction eid.
One way to know is to find out if there is a :db/txInstant datom with E=1001.
Then E=1001 is a transaction eid.
There is also the concept of different database partitions, which is mostly an implementation detail, but if you do
(part E) you will get back yet another eid, where (ident db (part E)) results in
:db.part/db if it is a db-internal thing (partition, schema and some more things)
:db.part/user as the default for your ordinary data and
:db.part/tx if E is a transaction.
#2021-10-2009:13Linus Ericssonhttps://docs.datomic.com/on-prem/query/indexes.html#partitions#2021-10-2010:50Tobias SjögrenEssentially you are saying yes to my second question?#2021-10-2010:51Linus Ericssonno.#2021-10-2010:52Linus EricssonThe datom [1001 :transaction/source "internet" 1001 true] has a transaction E without having :db/txInstant.#2021-10-2010:53Linus Ericssonthe entity 1001, however, has an attribute :db/txInstant, which (I think) is equal to being a transaction entity at all times.#2021-10-2012:11Ivar RefsdalCan't you simply check if E equals T in the (EAVTO) datom?
Correct me if I'm wrong please.#2021-10-2012:12Linus EricssonThat would not work in the general case - other transactions are allowed to update older transaction entities (except for the special attribute :db/txInstant)#2021-10-2012:17Ivar RefsdalHm, how about checking the existence of [?e :db/txInstant _ ?e true] in the history database?#2021-10-2012:50favilaat least in on-prem, transaction entity-ids are in the transaction partition (3), which you can find via (d/part entity-id)
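The partition check and the :db/txInstant heuristic above can be combined into one predicate (a sketch against the on-prem peer API, not an authoritative recipe; assumes datomic.api required as `d`):

```clojure
;; Sketch: is `eid` likely a transaction entity?
(defn tx-entity? [db eid]
  (and (= :db.part/tx (d/ident db (d/part eid)))               ; lives in the tx partition
       (some? (first (d/datoms db :eavt eid :db/txInstant))))) ; and has :db/txInstant
```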
#2021-10-2012:51favilathat doesn’t tell you that an entity id is an actual transaction, just that it could be. But if you found this entity id in the E of a datom you can be pretty confident that it is unless you have been asserting on entity ids without minting them#2021-10-2012:52favilaif you want to be doubly sure, look up (d/datoms db :aevt :db/txInstant entity-id) if you find an assertion, it’s a transaction#2021-10-2012:53favila:db/txInstant is special--only transactions have it, and you can’t remove it.#2021-10-2016:55Michael Stokleyi don't want to unify when entity ?e has values not specified by a given set. is there a way to express that with datalog? "give me all the companies that don't have offices outside of Virginia and North Carolina"#2021-10-2017:03FredrikYou can use ground to bind a variable to specific values within a query
https://docs.datomic.com/on-prem/query/query.html#ground#2021-10-2017:09FredrikSimple example
(d/transact conn-mem [{:sensor/humidity 124.0}
{:sensor/humidity 125.0}
{:sensor/humidity 126.0}])
(d/q '[:find [?a ...]
:where
[?e :sensor/humidity ?a]
[(ground #{124.0 125.0}) [?a ...]]]
(d/db conn-mem))
=> [124.0 125.0]
Is this what you want?#2021-10-2017:12Michael Stokleyi want the complement of that, sort of#2021-10-2017:12Michael Stokley"exclude any sensor that has any readings that /don't/ fall into this range"#2021-10-2017:13Michael Stokley"only return sensors for which ALL readings fall within a given range"#2021-10-2017:14Michael Stokley(assuming there are multiple sensors each with multiple readings)#2021-10-2017:41FredrikI see what you mean. You want to take the set difference with those entities having at least one value outside a prescribed set.#2021-10-2017:43FredrikModifying the above example slightly, this was the most concise thing I could get
(d/transact conn-mem [{:db/id "sensor 1" :sensor/humidity 124.0}
{:db/id "sensor 1" :sensor/humidity 125.0}
{:db/id "sensor 2" :sensor/humidity 124.0}
{:db/id "sensor 2" :sensor/humidity 100.0}
{:db/id "sensor 3" :sensor/humidity 125.0}])
(d/q '[:find [?e ...]
:where
[?e :sensor/humidity]
(not-join [?e]
[?e :sensor/humidity ?a]
(not
[(ground #{124.0 125.0}) [?a ...]]))]
(d/db conn-mem))#2021-10-2017:50FredrikThe not-join clause removes from consideration any entity satisfying the body, and differs from not in that you can select which variables need to be pre-bound in the body. It looks like a double negation of my first example, but instead of looking at each E-A pair, it considers only the entity.
#2021-10-2018:44Michael Stokleythanks, i'll try this out#2021-10-2018:45Michael Stokleyi was trying a not join but it didn't occur to me to have a new humidity lvar unbound to surrounding scope - and it didn't occur to me to use ground#2021-10-2107:45kirill.salykinis there any possibility to have explain functionality? to understand where the query spends most time?
I understand that there is no query planner (yet?), but having some explanation would help with manual query optimization
#2021-10-2110:03kirill.salykinno i didnt, thanks!#2021-10-2212:24Ivar RefsdalI haven't seen (contains? <input> <bound-variable>) recommended for matching a list of input values.
On my machine this uses only 10% of the time a regular list binding does.
Example here:
https://gist.github.com/ivarref/0d3d34eeeffbc4625d6120727368e405#2021-10-2109:25Kris CWhat is the datomic way of getting the latest/newest entity of some type that has a :date attribute?#2021-10-2109:40Lennart BuitWould the max aggregate function help you?
https://docs.datomic.com/cloud/query/query-data-reference.html#aggregates#2021-10-2113:39Kris CNo, it doesn't #2021-10-2113:48Lennart BuitMy suggestion would be something like this:
(d/q '{:find [?e (max ?date)]
:where [[?e :date ?date]]}
db)
Note this is a database scan#2021-10-2113:55Kris CI have tried something like this, but the date was not the "max date"#2021-10-2113:56Kris Cis it best practice for such cases to write a custom aggregate function and use it in the :find clause?#2021-10-2113:58Kris CIf I use the (max ?date) in :find, I get the results by :date ascending#2021-10-2114:02Kris Csame if I use (min ?date)...#2021-10-2212:56hdenThe answer depends on your definition of newest, for example if you define newest as
(d/q {:args [db]
:query '[:find (max ?tx)
:where
[?e :date _ ?tx true]]})
then you can work from there, retrieving all the entities that were touched in the same transaction using tx-range.
https://docs.datomic.com/cloud/time/log.html#tx-range#2021-10-2110:51Ivar RefsdalWe are seeing occasionally, once or twice a day, that kv-cluster/read-val takes slightly over 960 000 milliseconds.
It's almost always this "magic number" (of 960 000 ms or 16 minutes), normally it takes just a few milliseconds.
The segments are not large.
Anyone have experience with this scenario and/or have tips on how to fix it?
We are running the datomic on prem transactor (1.0.6344) in the azure cloud.
Our backing MS postgres server has 3000 IOPS available.
I am considering changing datomic.readConcurrency to a lower value.
Edit: And/or does anyone have experience in reproducing such a problem?
Is there some simple way to clear the local datomic cache to make every query/pull read (a lot of) data?#2021-10-2114:34favilaIs the key of the read value consistent? You can look up that key in your Postgres table to see if it’s unusual in some way. I’ve also seen abnormally large fetches caused by gc pauses. Is there gc pressure on this peer? Or maybe this is a driver or Postgres timeout#2021-10-2115:26Ivar RefsdalHm. What happens if you try to read too much?
After pushing datomic.readConcurrency=2 we are now (currently) seeing a bunch of "late responses"
pool-9-thread-1 state: WAITING
stacktrace:
#2021-10-2115:40Ivar RefsdalThank you @U09R86PA4 for your reply. I will reply more through tomorrow or later this evening.#2021-10-2110:59Ivar RefsdalI am also wondering if the 960 000 value is used somewhere deep down in Datomic...#2021-10-2112:28Tobias Sjögren“An entity is created the first time its id appears in the E position of a Datom.”
Is this correct?#2021-10-2113:15favilaEntities don’t have a built in notion of existence and aren’t created or destroyed#2021-10-2113:37Tobias Sjögren“For an entity id to appear in the A, V or TX positions of a datom it must first appear in the E position.” ?#2021-10-2113:52favilaNo? Entity ids are just numbers#2021-10-2113:53favilaThere’s a mechanism for “minting” new entity ids from tempids, and that advances a counter to ensure uniqueness, but that doesn’t create an entity#2021-10-2114:00Tobias Sjögren“Entity ids are just numbers”: How does the database distinguish the contents of the V position in a datom between being a primitive and an entity ID?#2021-10-2114:28favilaBy the value-type of the attribute
#2021-10-2114:28favilaAlso, it doesn’t have to anyway#2021-10-2212:22Tobias SjögrenSay I e.g. want to record the entity “Bill Clinton” into the database - can this new entity be “established” by adding an datom that just states an attribute value like “Bill” for attribute :person/name-first or can/should I establish the entity first without giving it any attribute value (if that’s at all possible)?#2021-10-2212:29favilaThere is no existence of an entity apart from datoms#2021-10-2212:29favilaYour second choice is impossible#2021-10-2212:29Tobias Sjögrenok!#2021-10-2212:30favilaAn entity isn’t a row in a table#2021-10-2212:30Tobias Sjögren(i’m for sure in the process of unthinking the relational model…)#2021-10-2212:31favilaOnly facts (datoms) exist#2021-10-2212:31Tobias Sjögren(putting on repeat)#2021-10-2212:32favila“Entity existence” is a domain concept now#2021-10-2212:33Tobias Sjögren“An entity cannot be put into the database without having an attribute value” Correct?#2021-10-2212:33favilaNonsense question. Only facts are put into a database#2021-10-2212:36Tobias Sjögren(question was, again, affected by the notion of a row - I guess..)#2021-10-2212:38favilaEAV table design fits though. A row in that pattern roughly corresponds to a datom#2021-10-2212:39Tobias Sjögren“Without any value, there’s no fact/datom.”#2021-10-2317:05respatializedI see Datomic's entity model as a concrete application of the idea of a https://plato.stanford.edu/entries/object/#ConsOnto from metaphysics:
> In addition to its properties, every object has as a constituent a bare particular (or ‘thin particular’ or ‘substratum’) that instantiates those properties. Bare particulars are ‘bare’ in at least this sense: unlike objects, they have no properties as parts.
>
> ... they are the subjects of properties or the items to which the properties are attached by instantiation or exemplification.#2021-10-2410:04Tobias SjögrenI like that connection. Have to read about it more. Do you recommend some book on the subject?#2021-10-2413:20respatializedI found https://www.routledge.com/Metaphysics-A-Contemporary-Introduction/Loux-Crisp/p/book/9781138639348# an extremely clear and helpful text (I read the 3rd edition, which is available inexpensively as a paperback). It doesn't focus on substance theory in particular but it's the overview that introduced me to those ideas and allowed me to draw that connection.
The Stanford Encyclopedia of Philosophy is a great resource in its own right, as well.
#2021-10-2114:21BenjaminHi if I run an app on aws ec2 (fargate) how much ram do I need to start a peer connection? 512 is not enough#2021-10-2114:22BenjaminCaused by: java.lang.IllegalArgumentException: :db.error/not-enough-memory (datomic.objectCacheMax + datomic.memoryIndexMax) exceeds 75% of JVM RAM
#2021-10-2114:33Benjaminah it is a setting of the transactor..? what is -Xmx1g usage#2021-10-2114:35jaret@benjamin.schwerdtner You can set the objectcache on the peer and on the transactor. The error you are encountering you are seeing specifically when you launch a peer?#2021-10-2114:35Benjaminyes when my peer is connecting#2021-10-2114:36jaretand you are running datomic onprem?#2021-10-2114:36Benjaminyea#2021-10-2114:37Benjaminmemory-index-max=256m
object-cache-max=128m
either I made a mistake when setting it or it still throws even with 1gb ram#2021-10-2114:37jaretThe peer builds the memory index from the log before the call to connect returns and the objectcache takes by default 50% of the remaining heap#2021-10-2114:38jaretOk so you are setting the object-cache-max to 128, the memory-index-max is set on the transactor.#2021-10-2114:39jaretAnd the memory index will rarely rise much above the memory-index-threshold, except during data imports.#2021-10-2114:40jaretWhat is the total size of your box?#2021-10-2114:40jaretthe JVM heap?#2021-10-2114:40Benjaminnot sure I'm setting 1024 with aws#2021-10-2114:44jaretIf you're setting -Ddatomic.objectCacheMax to a high value, you'll need to make sure your heap size (`-Xmx`) is large enough for memory-index-max plus object-cache-max to fit below 75% of JVM RAM (as indicated by error message).#2021-10-2114:44jaretYou have your object cache low right now, but you can set it to a min of 32.#2021-10-2114:46jaretNow there are tradeoffs to not having a good sized object cache, and if you have some time I would encourage that you read through our docs on memory and capacity planning: https://docs.datomic.com/on-prem/overview/caching.html https://docs.datomic.com/on-prem/operation/capacity.html#peer-memory#2021-10-2114:47jaretPeers need a copy of the memory-index-max, their own object cache, and application memory. We have an example system at 4GB of ram on all transactors and you'll notice that the object cache and memory index max take up <75% of the memory:#2021-10-2114:48jaret#2021-10-2114:51Benjaminthanks I'll check#2021-10-2115:36Benjaminbeginner question how do I set a system property before loading anything? Is adding a call to System/setProperty on the top of my main file correct/ sufficient? (before the ns form)#2021-10-2117:28Daniel Jomphehttps://clojure.org/reference/deps_and_cli
An example in one deps.edn:
:jvm-opts ["-Dfile.encoding=UTF-8" "-Dconf=dev-config.edn" "-Dclojure.spec.skip-macros=true" "-Xmx500m" "-Xss512k" "-XX:+UseG1GC" "-XX:MaxGCPauseMillis=50"]
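Applied to the peer-memory discussion above, the Datomic system properties can ride along in :jvm-opts so they are set before any Datomic class loads (a sketch; the alias name and sizes are illustrative, not recommendations):

```clojure
;; deps.edn fragment (sketch). Keeping objectCacheMax + memoryIndexMax
;; under 75% of -Xmx avoids the :db.error/not-enough-memory error.
{:aliases
 {:peer {:jvm-opts ["-Xmx1g"
                    "-Ddatomic.objectCacheMax=128m"
                    "-Ddatomic.memoryIndexMax=256m"]}}}
```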
#2021-10-2117:19Benjaminis there a way to set dynamo table properties? Is it possible or advisable to set the billing mode to "per request" ?#2021-10-2204:37Tobias Sjögren“In order for a keyword to appear in the A position in a datom, it must have previously appeared in the V position.” Correct? (trying to understand what a keyword really is…)#2021-10-2208:06Jakub Holý (HolyJak)Keyword is just a data type intended to be used as an identifier, typically for properties inside maps.
Attributes - your A - in datomic are identified by keywords and before you can use an attribute, you must define it, so yes, the keyword will first appear in the V position of [<attribute entity id> :db/ident <the keyword>].
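That resolution works in both directions with the peer API (a sketch; the attribute name is illustrative and a peer `db` value is assumed in scope):

```clojure
;; Sketch: idents map keywords to entity ids and back.
(def eid (d/entid db :user/given-name)) ; keyword -> entity id (a long)
(d/ident db eid)                        ; entity id -> :user/given-name
```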
#2021-10-2208:20Tobias SjögrenOK. Although you probably don’t want to do it - it would be possible to use the <attribute entity id> instead of the keyword in the A position, right?#2021-10-2208:39Jakub Holý (HolyJak)I think so. If you look at the raw datoms, the E and A positions are just numbers.#2021-10-2208:41Tobias SjögrenDo you happen to know the entity ids for the built-in attributes?#2021-10-2208:48Jakub Holý (HolyJak)They are not fixed between DB instances. Look at their corresponding datoms (I do not remember the correct function for that)#2021-10-2208:54Tobias SjögrenFrom what I’ve heard it is this:
(d/create-database "datomic:")
(-> (d/connect "datomic:")
(d/db)
(d/seek-datoms :eavt 0)
(->> (group-by :tx)
(sort)
vals))#2021-10-2212:43FredrikOn my current version of Datomic Free it is the below list. But it's an implementation detail, at least it is not documented anywhere, and you shouldn't rely on it.
{:db.type/instant 25,
:db/excise 15,
:db.type/boolean 24,
:db.unique/identity 38,
:db/fn 52,
:db.type/bytes 27,
:db/index 44,
:db/unique 42,
:db.part/user 4,
:db.lang/clojure 48,
:db.excise/beforeT 17,
:db.part/db 0,
:db.bootstrap/part 53,
:db.sys/reId 9,
:db/valueType 40,
:db.type/string 23,
:db.type/keyword 21,
:db/txInstant 50,
:db.type/ref 20,
:db/noHistory 45,
:db/isComponent 43,
:db/lang 46,
:db/fulltext 51,
:db.unique/value 37,
:db/retract 2,
:db.lang/java 49,
:db.part/tx 3,
:db/cardinality 41,
:db.excise/before 18,
:db/ident 10,
:db/code 47,
:db/add 1,
:db.type/long 22,
:db.cardinality/many 36,
:db.install/valueType 12,
:db.alter/attribute 19,
:db.install/function 14,
:db.install/partition 11,
:db.install/attribute 13,
:db.type/fn 26,
:db.cardinality/one 35,
:db.excise/attrs 16,
:fressian/tag 39,
:db.sys/partiallyIndexed 8}
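A map like that can be rebuilt from any database with a small query (a sketch, assuming the peer API required as `d` and a `db` value in scope):

```clojure
;; Sketch: collect every ident -> entity-id pair in the database.
(into {}
      (d/q '[:find ?ident ?e
             :where [?e :db/ident ?ident]]
           db))
```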
#2021-10-2212:43Tobias SjögrenInteresting!#2021-10-2212:50Tobias Sjögren@U024X3V2YN4 Would you happen to have the raw datoms where these built-in attributes are defined, as plain text?#2021-10-2212:57FredrikThe datoms are not stored individually, but in segments consisting of thousands of datoms. You can get a high-level picture of the internals here: https://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2021-10-2212:59FredrikAnd here: https://docs.datomic.com/cloud/whatis/architecture.html#2021-10-2213:02FredrikMaybe worth pointing out that a single datom can be stored multiple times, once for each index containing it. Every datom is stored separately in the EAVT index and the AEVT index, for instance.#2021-10-2213:05FredrikTo answer your original question, when a keyword K appears in the A position, Datomic will try to resolve that to an entity E with an attribute :db/ident whose value is K. If it cannot do that, for instance if you never installed such an entity, you'll get an error like "Unable to resolve entity: K".#2021-10-2213:07FredrikSuch keywords K are called idents, and you can read about it here https://docs.datomic.com/on-prem/schema/identity.html#idents. 
Probably that whole page is worth a look.#2021-10-2213:10FredrikAt least I found that page very helpful myself when trying to understand how Datomic named entities and how it looked things up, precisely a point I was quite confused about myself#2021-10-2215:18Tobias SjögrenConcerning keywords: That’s how I understood it - I was a bit unsure after reading that “Keywords resolve to themselves” in the “Programming Clojure” book…#2021-10-2215:20Tobias SjögrenI read a lot from the documentation but still have questions..#2021-10-2215:20FredrikKeywords do evaluate to themselves.#2021-10-2215:21FredrikIf you give Clojure a keyword in, say, the REPL, it will return to you that same keyword#2021-10-2215:22Tobias SjögrenYou said a keyword used as an attribute will resolve to an entity id..#2021-10-2215:23FredrikI was talking specifically in the context of Datomic transactions. Datomic will take that keyword and try to find a matching entity
#2021-10-2215:24FredrikBut that has all to do with how Datomic works, not how keywords are evaluated#2021-10-2215:26FredrikIt might help to break it down into two steps. First you write some transaction data. And this really is simply data:
[{:db/id "sensor 1" :sensor/humidity 124.0}
{:db/id "sensor 1" :sensor/humidity 125.0}
{:db/id "sensor 2" :sensor/humidity 124.0}
{:db/id "sensor 2" :sensor/humidity 100.0}
{:db/id "sensor 3" :sensor/humidity 125.0}]
Then you transact this to Datomic using d/transact. Second, when Datomic receives this, it will notice there are attributes referred to by keywords, and then try to find matching entities.#2021-10-2207:55popeyeI was going through https://docs.datomic.com/cloud/dev-local.html and
(require '[datomic.client.api :as d])
(def client (d/client {:server-type :dev-local
:system "datomic-samples"}))
(d/list-databases client {})
=> ["mbrainz-subset" "solar-system" "social-news" "movies" ...]#2021-10-2207:56popeyeI think we need to add this statement before printing the list of databases
(d/create-database client {:db-name "friends"})#2021-10-2207:57popeyebecause without that it gave me an empty vector#2021-10-2208:50Tobias SjögrenDoes anyone know if there are examples of raw datoms data somewhere online?#2021-10-2208:56popeyehttps://datomic-samples.s3.amazonaws.com/datomic-samples-2020-07-07.zip you may be looking for this
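popeye's fix a few messages up can be written out end-to-end (a sketch assuming dev-local is installed and its :storage-dir configured):

```clojure
(require '[datomic.client.api :as d])

(def client (d/client {:server-type :dev-local
                       :system "datomic-samples"}))

;; On a fresh system the database list is empty until something is created:
(d/create-database client {:db-name "friends"})
(d/list-databases client {})   ; now includes "friends"
```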
#2021-10-2209:01Tobias SjögrenDo you know what format “db.log” files are in? I can’t read them..#2021-10-2507:54Linus EricssonThe data is stored in either fressian https://github.com/clojure/data.fressian
or transit
https://github.com/cognitect/transit-clj
The explicit storage of the various nodes is an implementation detail. The best way to get some example data is to follow the instructions and restore for instance the mbrainz-sampledata.#2021-10-2515:00Tobias Sjögrenanother area I’d like to know about…#2021-10-2215:17Tobias SjögrenIs someone here using Datomic without using Clojure (if it is even possible) ?#2021-10-2215:17FredrikThere is full support for using Datomic from Java: https://docs.datomic.com/on-prem/reference/languages.html?search=%20#2021-10-2215:19Tobias SjögrenEither Clojure or Java and nothing else?#2021-10-2215:19FredrikThere are third-party libraries for some other languages#2021-10-2308:20Ben SlessOnly JVM langs from what I've seen#2021-10-2507:55Linus EricssonYou can use any JVM-language with the Java wrapper. The datomic implementation in the peer is dependent on clojure since it is implemented in (JVM-)clojure.#2021-10-2218:24Ivar RefsdalIt seems that https://support.cognitect.com is down#2021-10-2218:31jaretIvar, are you accessing this site from a mobile device?#2021-10-2218:31jaretDoes it redirect you to https://cognitect.zendesk.com#2021-10-2218:32jaretTrying to determine what the outage is.#2021-10-2218:38jaret@UGJE0MM0W I am seeing everything working from my end can you share specifically what you are seeing?#2021-10-2218:58Robert A. RandolphIt appears that there is or was an issue on Zendesk's end with authentication. We were able to log in after clearing all zendesk cookies.
#2021-10-2507:19Ivar RefsdalHere is what I'm seeing#2021-10-2513:38jaretHi @UGJE0MM0W I believe we have resolved that just now. I would appreciate independent confirmation so if you have a moment, please let me know.#2021-10-2513:47Ivar RefsdalYes, I'm logged in now when I refreshed :thumbsup:#2021-10-2513:47jaretOh great! Thanks Ivar!#2021-10-2513:52Ivar RefsdalNo problem 🙂#2021-10-2510:01djanusWith Datomic On-Prem transactor deployed on AWS in a high-availability setup (one active + one standby transactor), what is the simplest way of identifying which instance currently is active and which is standby?
I’ve found the instance IP in the transactor logs emitted at startup time, but that’s unwieldy.#2021-10-2611:32jaretHi @UFR3C1JBU the transactor logs will contain a log line lifecycle event with the status standby#2021-10-2611:32jareti.e.
2017-10-19 10:13:26.532 INFO default datomic.lifecycle-ext - {:event :transactor/standby, :rev 1238602, :missed 1, :timestamp 1508407998353, :pid 2604, :tid 20}#2021-10-2512:26Ivar RefsdalI see that datomic transactor on-prem bundles org.postgresql/postgresql "9.3-1102-jdbc41", released on Jul 18, 2014.
Is that also the recommended PostgreSQL driver for peers?
Why such an old release?
At my company we are connecting to PostgreSQL 11#2021-10-2512:40donavanI’m having to answer questions about when we apply patches for our infra. When are the EC2 instances that back QGs replaced, is it only during Ions upgrades? I gather it’s not when we deploy. How often is the AMI updated? Have I missed this info in the docs?#2021-10-2611:28jaretHi @U0VP19K6K, every time you deploy you will cycle the instances and install your ion code on the instances. However, the instances AMI is tied to the CFT version they are on. So it depends on what you define as applying patches. Does that answer your question?#2021-10-2611:29donavanIt does thanks Jaret 🙂#2021-10-2514:11xcenoI just tried to upgrade an existing datomic production system from 715-8973 to 884-9095 https://docs.datomic.com/cloud/changes.html#884-9095 and got this error while upgrading the storage:
> UPDATE_FAILED The following resource(s) failed to update: [EnsureAdminPolicyLogGroup].
My stack has a root stack with the name of my system and two children: compute & storage. I just found this ticket where the solution was to split the stack: https://forum.datomic.com/t/failed-production-storage-update-to-884-9095/1947
So, should I also split our stack and try upgrading again?#2021-10-2611:24jarethi @U012ADU90SW yes, you will want to perform a split stack operation to get off the Marketplace template. From there on in you will be able to update the individual stacks depending on what has been released.#2021-10-2611:24jarethttps://docs.datomic.com/cloud/operation/split-stacks.html#2021-10-2611:24jaretIf you encounter any issues please let me know directly or e-mail us at support, <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>.#2021-10-2611:53xcenoThank you jaret! I'll give it a go and report back later#2021-10-2515:20Tobias SjögrenTrying to learn more about Datomic indexes.. I seems “covering indexes” means that the indexes are actually full copies of the datoms but sorted in different E-A-V ways. It seems the four indexes are stored in Amazon S3 - but are they copied to each peer also?
I’d like to learn more about both indexes and caching - in general, and Datomic specifically. Anyone know what are the best sources for such information?#2021-10-2516:37schmeeit depends on whether you’re using Cloud or On-Prem, but either way the docs is a good place to start:
• https://docs.datomic.com/cloud/whatis/architecture.html
• https://docs.datomic.com/on-prem/overview/architecture.html#2021-10-2516:38Tobias SjögrenOther than the docs I should have said..#2021-10-2516:43favilahttps://tonsky.me/blog/unofficial-guide-to-datomic-internals/#2021-10-2516:44favilaPeers pull segments (blocks of sorted datoms) down as they need them from storage or one of the caching layers in front of storage. The unindexed portion of change is kept in memory on all peers
I am trying to learn more about transaction functions, but I am having a little difficulty understanding how to install one.
Is it possible to have the transaction function implemented by two or more functions, instead of having all the code inside the :code keyword? For example:
(defn other-func-2 [] <all-code-here>)
(defn other-func-1 [] (let [foo (other-func-2)] <all-code-here> ))
#db/fn {:lang :clojure
:params [db offer]
:code (other-func-1)}
Instead of
#db/fn {:lang :clojure
:params [db entity]
:code <all-code-here>}
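One way to get the composition being asked about, sketched with the on-prem peer API (all names here are hypothetical; assumes datomic.api required as d): install each helper as its own database function and call it via d/invoke.

```clojure
;; Sketch: install a helper fn, and a tx fn that composes it via d/invoke.
@(d/transact conn
  [{:db/ident :my/check-offer
    :db/fn (d/function '{:lang   :clojure
                         :params [db offer]
                         :code   (assoc offer :offer/checked true)})}
   {:db/ident :my/add-offer
    :db/fn (d/function '{:lang   :clojure
                         :params [db offer]
                         ;; d/invoke calls another installed db function
                         :code   [(datomic.api/invoke db :my/check-offer db offer)]})}])

;; Invoke the composed fn by ident in transaction data:
@(d/transact conn [[:my/add-offer {:offer/id 1}]])
```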
#2021-10-2519:26favilaThe problem is the environment isn’t shared among all peers. You can either install the code into the database itself, in which case it can only reference things you know all peers have in their environment (that includes the database itself--you can use d/invoke to invoke other db functions).#2021-10-2519:26favilaor you can ignore all this installation stuff and just put the functions into the transactor’s classpath. https://docs.datomic.com/on-prem/reference/database-functions.html#classpath-functions#2021-10-2519:27favilaif you use d/with, these need to be in the peer’s classpath too#2021-10-2819:07César AugustoHey Favila, thank you for the answer!!
1 - I think I didn't understand how to do that... is there any example of install code into the database itself? Because I thought I was doing it when I install the db/fn.
2 - About the transactor's classpath: Do I need to generate my code as a lib in order to add it to the classpath?
3 - Do you know any example using d/with ? I didn't understand how it is related to the other options#2021-10-2819:22favilaThat link documents two different things: 1. putting executable code as data into the db; 2. calling a “normal” function from transaction data.#2021-10-2819:23favilaTalking about (2) first. On the transactor, you make functions available by including a jar with that code in it on startup
export DATOMIC_EXT_CLASSPATH=mylibs/mylib.jar
bin/transactor my-config.properties
#2021-10-2819:25favilaThen you can use it anywhere in the transactor process by using the symbol name, e.g. attribute predicates, entity predicates, as a tx fn, or inside a query running in any of those.#2021-10-2819:26favilain transaction data, “invoking” one of these looks like this [[my.namespace/myfunction arg1 arg2 argV…]]#2021-10-2819:28favilad/with does everything a transactor does (takes a db and tx-data and returns a new db), but locally and doesn’t write to storage. But any of these function-symbol references will be resolved as it runs, so that DATOMIC_EXT_CLASSPATH jar has to be in the peer’s classpath too or d/with won’t work in those cases.#2021-10-2819:32favila(1) is installing a function-code object (as a string that’s compiled+cached on-demand) into the db itself as a value on an entity, and you “invoke” that code in transaction data using its keyword ident, e.g. [[:my/tx-fn arg1 arg2]] . Peers can get this code through reading the database itself, but you have to use the special interfaces specific to that--the normal language runtime (e.g. require) doesn’t know about it.#2021-10-2819:33favilahopefully that answers all your questions?#2021-10-2820:42César AugustoThank you again @U09R86PA4, yeah it looks like it answered all questions, just to make sure I got it:
1. for add code as data into db: I can create it using the #db/fn. It only accepts clojure core symbols/function and datomic.api symbols/functions (i.e. d/q function). custom function doesn't work using code as data. It executes like [[:my-fn arg1 arg2 ...]] .
2. for calling a custom function: I need to have this function in a library and add it to the classpath of the datomic using the DATOMIC_EXT_CLASSPATH . It executes like [[lib.namespace/lib-function arg1 arg2...]]
The question that raised now for me is about "Peers can get this code through reading the database itself, but you have to use the special interfaces specific to that--the normal language runtime (e.g. `require`) doesn’t know about it"
1. I don't know in which case I would like to peers to get the code
Your answer helped me a lot - thank you again - I think there were some concepts I was missing, for example, that the function is sent to the transactor to be executed and the transactor doesn't have the same libs as the peers have.#2021-10-2519:21stijnIf I send a list of transactions to 2 different databases (even 2 different transactors), will the resulting t be the same for the different databases? Or is there no such guarantee?#2021-10-2519:23favilaNo guarantees. T advancement is an implementation detail. In practice the only guarantee is that it won’t go backwards, and it will go up at least 1 for each successful transaction.
#2021-10-2519:21stijn(this question is about on-prem)#2021-10-2520:52uwoI'm looking to find the state of an entity immediately prior to a transaction. My first inclination is to use `(dec (d/tx->t tx))` with an as-of query, however I know that t values do not always increase by 1. My impression is that this will still work, but is there a better approach?#2021-10-2520:55Lennart BuitYou get a :db-before value in your transaction result. That's a database immediately prior to the transaction just processed#2021-10-2520:58Lennart Buit(Similarly, you get a :db-after value, which is the db value immediately after the transaction just processed 😉 )#2021-10-2520:58uwoThanks for the response, but in this circumstance I'm querying for the tx-id; the transaction occurred in the past#2021-10-2520:58uwoOtherwise I would absolutely take that route!#2021-10-2520:59Lennart Buitah yeah, that was the other possibility I wasn’t hoping for haha
#2021-10-2521:01favilathis will work in that the state of the database you read will be the one immediately prior to the T
#2021-10-2521:01favilahowever (dec some-t) isn’t guaranteed to be a t#2021-10-2521:02favilai.e. you won’t necessarily be able to (d/t->tx t) and get an entity with :db/txInstant asserted on it#2021-10-2521:02uwoThanks @U09R86PA4! I just found an example in day-of-datomic, and it looks like I don't even need to use the t-time. I can just dec the tx-id: https://github.com/Datomic/day-of-datomic/blob/20c02d26fd2a12481903dd5347589456c74f8eeb/tutorial/query_tour.clj#L91#2021-10-2521:02favilayou can dec either one. T and TX differ only by some constant high bits#2021-10-2521:03uwoexcellent!#2021-10-2521:03favilathat’s why d/t->tx and d/tx->t exist and don’t need a db
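The approach favila describes can be sketched with the peer API (`tx-id` and `eid` are hypothetical bindings obtained from an earlier query):

```clojure
;; (dec tx-id) may not be a real transaction id, but as-of still yields
;; the database value immediately prior to that transaction:
(let [db-before (d/as-of db (dec tx-id))]
  (d/pull db-before '[*] eid))
```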
#2021-10-2521:03uwoWhoops. that's embarrassing. I should know better than to use the wrong signature#2021-10-2618:21lostineverlandLooking for best practice for using datomic with clojurescript (targeting nodejs).#2021-10-2706:54Tobias SjögrenAnyone know if offline support including some write capability has been talked about? One idea could be to support creation of new entities offline for syncing later when online..#2021-10-2707:03Tobias SjögrenHas anyone examined or thought about a possible connection between Datomic and “Local-First Software” ? (https://martin.kleppmann.com/papers/local-first.pdf)#2021-10-2715:11favilahttps://replikativ.io/ is in this space
#2021-10-2719:54César OleaI read this paper some time ago and couldn't find it later. And now all of a sudden here it is. Thank you!#2021-10-2711:19Ivan FedorovI’ve run into Could not find artifact com.datomic:ion:jar:0.9.50 in central ()
I see it was discussed previously, but I don’t understand the investigation scenario so far.
I’ve checked the ~/.m2/settings.xml and the ~/.clojure/deps.edn and it all looks as in the instruction.
Project’s deps.edn also works for other team members.
How can I diagnose if I’m connecting to the right datomic cloud with the right credentials?
__
upd: I’ve found the tip about the IAM user and the S3FullAccess policy, but I’m too bad at AWS language.#2021-10-2712:49xceno> I’ve found the tip about the IAM user and the S3FullAccess policy, but I’m too bad at AWS language
What that means is that you also need to set up the AWS CLI V1 as described in the docs, with your AWS keys that you get in the AWS console in the top right corner under "my security credentials".
Your AWS user also needs Read-Access to S3#2021-10-2712:50Ivan FedorovYeah, got it, on it, thanks!#2021-10-2713:50Ivan FedorovGot to
Error building classpath. Could not transfer artifact software.amazon.ion:ion-java:jar:1.0.2 from/to central (): status code: 416, reason phrase: Range Not Satisfiable (416)
#2021-10-2715:20xcenoYeah I encountered that a lot too. Try deleting your maven cache and try again
#2021-10-2716:18Ivan Fedorovthis helped, thanks!#2021-10-2711:28conanHas anyone got Metabase connected to Datomic? I'm unable to authenticate using Presto, seems to be an SSL issue#2021-10-2909:49Linus EricssonI have not tried, but I there has been quite a lot of changes (for the better) in both JDK 8 and JDK 11, what JDK(s) are you using?#2021-11-0318:23Jeff Evans(engineer at Metabase here, who has worked on the Presto driver specifically)
do you have a full stack trace of the error?#2021-10-2715:37favila(d/q '[:find ?x
:where
[(ground :foo) ?x]
(not [_ :db/ident :does-not-exist])]
db)
=> #{} ;; expect #{[:foo]}
What’s going on here?#2021-10-2715:39favilabefore anyone asks, this ident definitely doesn’t exist:#2021-10-2715:39favila(d/q '[:find ?x
:where
[(ground :foo) ?x]
[_ :db/ident :does-not-exist]]
db)
=> #{}#2021-10-2715:41favilaContext: I’m trying to write a query which will work (i.e. not error and produce a default value) before an attribute is installed.#2021-10-2715:47favilaworkaround:#2021-10-2715:47favila(d/q '[:find ?x
:where
[(ground :foo) ?x]
[(ground :does-not-exist) ?v]
(not-join [?v]
[_ :db/ident ?v])]
db)
=> #{[:foo]}
#2021-10-2716:09FredrikWill it still work with not instead of `not-join`?#2021-10-2716:10FredrikSince there are no logic variables in the not-clause#2021-10-2716:16favilayes
(d/q '[:find ?x
:where
[(ground :foo) ?x]
[(ground :does-not-exist) ?v]
(not [_ :db/ident ?v])]
db)
=> #{[:foo]}#2021-10-2716:17favilaI’ve had problems with _ unifying with itself and with the non-explicit -join doing the wrong thing or throwing an error--so it’s just a paranoid tick at this point#2021-10-2716:18FredrikWhat do you mean by "_ unifying with itself"?#2021-10-2716:25favilaI destructured something like [(re-matches #"a (b) (c)" "a b c") [_ _ ?c]] and it would never produce a ?c because the two _ were unequal. the workaround was [?_ignore1 ?___ignore2 ?c]#2021-10-2716:25favilai.e. it appeared to be unifying _#2021-10-2716:25favilaI can’t do it now though, maybe it’s been fixed, or the conditions were more complicated#2021-10-2716:46FredrikStill strange that your original query doesn't work as expected. Perhaps there should be at least one bound variable in a not?#2021-10-2719:37Mike RichardsIs there a “correct” way to shut down Datomic (client peer api, on prem) at the end of a one-off task, say a maintenance script? I’ve tried a number of variations on this sort of code, but frequently (though not always) end up with exceptions:
(d/release connection) ; seems to happen whether I call release or not
(d/shutdown false) ; true/false does not have an impact here either
(shutdown-agents)
The exceptions are always “AMQ219019: Session is closed”, e.g.
[datomic.slf4j] (clojure-agent-send-off-pool-4) {:message "Caught exception", :pid 1, :tid 38}
org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ219019: Session is closed#2021-10-2816:04Tobias SjögrenFrom what I understand, a query against the EAVT index to get every attribute and value associated with a specific entity is much faster than the same query against the historic order log would be - why is that?
(I know the EAVT index contains datoms sorted by the “E” position)#2021-10-2816:13favilaEAVT is indexed by E; the log is ordered by TX.#2021-10-2816:14favilaactually what do you mean by the “historic order log”, the history index or the tx-log?#2021-10-2816:15Tobias SjögrenShould that be understood as datoms sorted by E compared of by TX with no other difference?#2021-10-2816:15Tobias SjögrenThe doc says “transaction data in historic order”, that’s why I used that wording..#2021-10-2816:17favilathat sounds like the tx-log#2021-10-2816:17Tobias Sjögrenyes#2021-10-2816:17favilait’s just “every transaction ever, in the order it was written”#2021-10-2816:18favilaso if you want to know about datoms related to a specific E, you would have to inspect all transactions for that.#2021-10-2816:18Tobias Sjögreninstead of having them grouped together as in the EAVT index?#2021-10-2816:19favilayes#2021-10-2816:21Tobias SjögrenI need a better understanding of why datoms that sits next to each other (as in the EAVT index) are faster to query than ones that are “spread out” (as in the TX log)..#2021-10-2816:31favilaEAVT the index is a sorted B-tree like structure, with high branching factor and three levels, so if you know an E and that corresponds to the sort order, finding the segments EAVT data requires reading at least 3 segments (6 if you want history), and the parent segments are more likely to be in memory anyway since they are shared by many reads; finding the E in the tx-log must examine all data ever written--it’s as good as being unsorted.#2021-10-2817:19Tobias SjögrenWould you say putting an entity with all its attributes/key-values in a JSON object is somehow similar to the EAVT B-tree ?#2021-10-2817:28faviladepends on the details. json objects are typically implemented as hashmaps, so probably not. 
At small scale it wouldn’t matter, but then nothing matters at a small enough scale#2021-10-2909:53Linus EricssonPlease look into this introduction to Clojure PersistentVector to understand what optimizations it offers. This structure is (more or less) used in Datomic. https://hypirion.com/musings/understanding-persistent-vector-pt-1
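favila's comparison can be made concrete with the peer API (a sketch; `eid` is a hypothetical entity id):

```clojure
;; EAVT index: a sorted seek straight to one entity's datoms,
;; touching only a handful of segments.
(d/datoms db :eavt eid)

;; tx-log: unsorted with respect to E, so finding the same datoms
;; means scanning every transaction ever written.
(->> (d/tx-range (d/log conn) nil nil)
     (mapcat :data)
     (filter #(= eid (:e %))))
```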
#2021-10-2909:55Linus EricssonHashMaps are in general not as fast as persistentVector and other B-tree like structures (if you know what you look for, that is), because they have to look for data stored at random positions in memory. CPU:s really like when data is stored in sequential memory blocks, and in the same 4 kb RAM page etc etc.#2021-10-2910:00Tobias SjögrenWhat about data on disk, in contrast to in memory?
(I need to learn more about this stuff..)#2021-10-2910:03Linus EricssonDatomic is a typical SSD-drive database. And there paging is similar to how it works in the CPU/cache pipeline. But a spinning disk also has the concept of pages (with memory-mapped paging to disk heavily optimized in the OS kernel).
Datomic doesn’t use this directly though, but benefits through databases that do (DynamoDB for instance).#2021-10-3005:03Tobias SjögrenSo far I’ve just had a quick look at the link about PersistentVector - it seems it solves the problem of combining immutability and non-redundancy, there’s immutability but no copying - is that correct?
”When all data is immutable, “update” translates into “create a copy of the original data, plus my changes.””
”…persistent means that the data structures preserve old copies of themselves by efficiently sharing structure between older and newer versions.”#2021-10-3111:38Tobias SjögrenI’m used to see the word ”persistance” in the context of ”persistent storage” and ”persistance layer” which is something else..#2021-10-3101:01YiHi, everyone! I am trying to add datadog JVM agent to Datomic transactor (on prem) java process but it doesn't seem the -javaagent JVM setting is being picked up. I noticed this https://ask.datomic.com/index.php/458/can-you-start-the-datomic-jvm-with-a-custom-jvm-agent while investigating. Is adding a JVM agent doable with Datomic on prem? We are at 0.9.6045.#2021-10-3104:04favilaTake a look at the startup script at bin/transactor. Either you can find an env var it reads and use it to set Java opts, or you can replicate what it does yourself and add more args (it’s not very complex).#2021-10-3104:13YiThanks @U09R86PA4, yes, I added a javaagent . Here is the process
00:02:33 java -server -cp resources:datomic-transactor-pro-0.9.6045.jar:lib/*:samples/clj:bin: -Xmx4g -Xms4g -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -XX:+ExitOnOutOfMemoryError -XX:HeapDumpPath=/opt/datomic-pro-0.9.6045 -Ddatomic.peerConnectionTTLMsec=60000 -javaagent:./dd-java-agent.jar clojure.main --main datomic.launcher /opt/datomic-pro-0.9.6045/config/transactor.properties
But I don't see any startup log which should appear if it is working fine https://docs.datadoghq.com/tracing/troubleshooting/tracer_startup_logs
I have ruled out a few possibilities
• the log is muted
• the dd-java-agent.jar is corrupted
• datadog-agent APM agent is not running fine
After seeing the mentioned post, I am wondering if it is just that custom JVM agents are disabled for the transactor (on prem or cloud)#2021-10-3104:45favilaI believe a javaagent param may need to go before jar or cp params to work? I notice dd also has some other configuration you can add to make it noisier to make sure it’s loading and working https://docs.datadoghq.com/tracing/setup_overview/setup/java/?tab=containers#2021-10-3104:47favilaI’m not aware of any way a Java program can disable or circumvent a static agent. I’m not sure how that link gave you that impression#2021-10-3105:03Yiyou are absolutely correct that javaagent param needs to go before jar or cp params. @U09R86PA4! Thanks a lot~#2021-11-0115:58Sebastian AllardHey,
We got some trouble with datomic-cloud. We deleted a Dynamodb table and then restored it and now it is not possible to write to it (we can still read data). We get the following exception when we try to create a transaction:
"exception": {
"type": "app-name.exception/standard",
"app-name": {
"message": "Unable to persist transaction.",
"status": 500,
"extra-info": {
"datomic.client-spi/context-id": "context-id...",
"cognitect.anomalies/category": "cognitect.anomalies/fault",
"datomic.client-spi/exception": "java.lang.NullPointerException",
"datomic.client-spi/root-exception": "java.lang.NullPointerException",
"cognitect.anomalies/message": "java.lang.NullPointerException with empty message",
"dbs": [
{
"database-id": "db-id-here...",
"t": 18381,
"next-t": 18382,
"history": false
}
],
"ex-type": "transaction"
}
}
},
"db": {
"name": null,
"t": 18381,
"id": "db-id-here..."
}
Is there anyone familiar with using datomic cloud and AWS? Grateful for any help!
#2021-11-0116:18jaretHi @allard.valtech what is the nature of this DB? Is this a test DB or a production system? Deleting underlying storage and restoring from a restore point is generally not supported in Datomic Cloud.#2021-11-0116:23jaretDid you delete any other durable storage resources? listed here in our docs? https://docs.datomic.com/cloud/operation/deleting.html#deleting-storage#2021-11-0208:11Sebastian AllardHey @jaret, thanks for your reply!
It was a database used in our dev environment. I find it hard to believe that it is not possible to restore backups in datomic cloud? No, we only deleted a backed-up table in DynamoDB and then restored it. The purpose was to import a production table to recreate a bug from the production environment.#2021-11-0208:24Sebastian AllardThere is a chance the previous developer created Datomic backups - but backup/restore has not been added for Datomic cloud yet? https://forum.datomic.com/t/cloud-backups-recovery/370/2 It is kind of crazy that a database does not support backups/restores#2021-11-0212:00stuarthallowayHi @allard.valtech. Thanks for using Datomic! A few points:
1. You can use https://docs.datomic.com/cloud/dev-local.html#import-cloud to make a local copy of a production database for dev purposes.
2. The resources created with a Datomic cloud system (e.g. DDB, EFS, and S3) should be managed only through Datomic tools. AWS tools are not aware of Datomic semantics and cannot preserve system invariants between different resources. We will look at making the documentation more clear on this point.
3. We are working on new ideas for data mobility and would love to hear about your specific use case if it is not covered by import-cloud.
#2021-11-0213:37Sebastian AllardThanks for the clarification @U072WS7PE 🙂
I think import-cloud should cover our use case. Being new to Datomic,
it seems like we've made some incorrect assumptions. Do you have any
pointers on what action we could take in order to be able to write to
the restored table? The restored table has the same ARN, so it seems
like it should work#2021-11-0213:08Dean HayesHi Everyone,
I wonder if anyone might be able to help me with something?
I am trying to use an entity spec with datomic to provide constraint checking when entities are transacted/retracted from the database using :db/ensure. This works fine - if my predicate fails the entities are not transacted into Datomic. But in the case where the predicate passes, the :db/ensure key/value also appears to be stored in the database - it's visible when I pull the entity out again. This isn't what I expected to happen according to the docs:
https://docs.datomic.com/cloud/schema/schema-reference.html#entity-specs
":db/ensure is a virtual attribute. It is not added in the database; instead it triggers checks based on the named entity."
Have I done something wrong causing :db/ensure to be added to the database? Or have I misunderstood the docs?
I've got a simple test case showing this behaviour if it helps see what I mean.
If anyone can point me in the right direction that would be much appreciated, cheers!#2021-11-0213:28FredrikI get the same behaviour in on-prem. Maybe the documentation is only valid for Datomic Cloud?#2021-11-0213:34Dean HayesThanks Fredrik 🙂
Good thinking, but looking at the on-prem docs it seems the behaviour should be the same (that is - it sounds like the :db/ensure key shouldn't end up in the db), so not sure that's the reason:
https://docs.datomic.com/on-prem/schema/schema.html#entity-specs#2021-11-0214:54FredrikI'm just guessing, but maybe being "virtual" has to do with the way the value of the attribute is resolved: The fact such an attribute exists is in fact recorded to the database, but the lookup is done at runtime using the classpath.#2021-11-0313:25Dean HayesThanks Fredrik. I'll keep looking and let you know if I work out what's up/stop it behaving like this/hear why it is like it is :)#2021-11-0215:49Daniel JompheHi Cognitect, since Datomic Cloud 2021/09/29 936-9118, is this still true for Ion apps?
> There is no redirection when running in Datomic Cloud, so it is ok to leave calls to initialize-redirect in your production code.
I'm investigating why our logs cast no more appear in CloudWatch.
We do init our app, even in prod deployments, with (cast/initialize-redirect :tap) and if I add-tap using a remote REPL connection to the Ion app, I see that the cast logs are indeed tapped. Due to the documentation I'd expect them to not be tapped, but instead be sent to CloudWatch. That's what happened until recently and I wonder if I could correlate it to our update to 936-9118.
I tried not running (cast/initialize-redirect :tap) in deployed ion code.
Logs are not back up yet.
Continuing to try to find a proper cause.#2021-11-0219:02Daniel JompheAs of now I'm certain the new version of Datomic is not the cause.
False alarm to Cognitect. 🙂#2021-11-0219:17Daniel JompheI found the cause. Some time ago I started loading this file upon booting, even when in deployment in Datomic Cloud Ion. I shouldn't have.
Cognitect, please note that it seems to prove that your documentation is out of date.
That is, calling initialize-redirect in production will have an effect, and block logs from appearing in CloudWatch.
The dark background screenshot shows that I needed to comment out these forms to bring back our logging capabilities. So I'll need to load this only in local dev, like I did before.#2021-11-0219:24Daniel JompheIt was ok to bring back the ns with its require during boot, but not both expressions below.#2021-11-0217:42jdkealyIs it possible to restore a postgres DB dump between datomic instances ?#2021-11-0218:45jdkealyI'm having trouble connecting to local postgres.
I have a docker-compose with hosts postgres and datomicdb
My connection string is "datomic:sql://?jdbc:"
My transactor has
protocol=sql
host=postgres
port=4334
license-key=LICENSE_KEY
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
sql-driver-class=org.postgresql.Driver
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
When i try to connect from clojure i get,
clojure | Error communicating with HOST postgres on PORT 4334
So, I try swapping out the postgres host for datomicdb and i get
clojure | :db.error/read-transactor-location-failed Could not read transactor location from storage#2021-11-0220:28jdkealyfigured it out ...
The issue was i had put my host as postgres
The host was supposed to be datomicdb#2021-11-0219:00Ivan FedorovHey, I’m writing a malli schema translation to datomic schema.
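To summarize the docker-compose resolution above as a hedged sketch (db name, port, and credentials are hypothetical): the transactor's host= property advertises the transactor's own container to peers (here the datomicdb service), while the JDBC URL in both the transactor properties and the peer URI points at the SQL storage service:

```clojure
(require '[datomic.api :as d])

;; datomic:sql://<db-name>?<jdbc-url>
;; The JDBC part targets the postgres service; peers then find the
;; transactor via the location it wrote to storage (the host= property).
(def uri
  "datomic:sql://my-db?jdbc:postgresql://postgres:5432/datomic?user=datomic&password=datomic")

;; (d/connect uri)
```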
I’m puzzled with how to translate [:vector string?] .
I think it should be a homogeneous tuple like:
{:db/ident :order/comments
:db/valueType :db.type/tuple
:db/cardinality :db.cardinality/one
:db/tupleType :db.type/string}
But my colleague suggests it may be just a cardinality/many string type prop.#2021-11-0219:02Joe LaneIs order important to your use-case?#2021-11-0219:03Joe LaneTuples only support up to 8 values, so I think your colleague might be on to something.
#2021-11-0219:09Ivan Fedorovthanks! I missed it#2021-11-0219:09Ivan Fedorovorder is important though#2021-11-0219:11Joe LaneOk, if order is important, then you need to represent an ordinal somehow.
:order/comments #{{:ordinal 1 :comment/text ":100:"} {:ordinal 2 :comment/text "Meh."}}#2021-11-0219:12Joe LaneMay also be worth making the comment entity a component attribute of the order.#2021-11-0219:18Ivan FedorovHmm, yes, thanks for the note!
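Joe's component/ordinal suggestion above can be sketched as schema (a hedged sketch; all idents are hypothetical):

```clojure
;; Ordered, owned comments: sets are unordered, so position is explicit.
[{:db/ident       :order/comments
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many
  :db/isComponent true}                  ; comments live and die with the order
 {:db/ident       :comment/ordinal       ; explicit position
  :db/valueType   :db.type/long
  :db/cardinality :db.cardinality/one}
 {:db/ident       :comment/text
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}]
```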
I must specify that I’m writing a transformer function, to translate an arbitrary malli schema to a datomic schema, so I’m talking about potential input.
I guess I should add warnings data to the output then.#2021-11-0308:35Sebastian AllardHey,
Does anyone have any pointers on how to enable datomic to write to a restored dynamo-db table? It is still possible to read from the table, but writing to it fails (https://clojurians.slack.com/archives/C03RZMDSH/p1635782288164300). The only drift we can see is the TxGroupRole and BastionAutoScalingGroup in the compute-stack.#2021-11-0309:03favilaI think you are firmly in “open a support ticket” territory. You will probably have to overwrite some dynamo db keys manually
#2021-11-0400:29jdkealyDoes bin/datomic ensure-transactor my-transactor.properties my-transactor.properties not work for sql storage?
When I copy the config/sample/sql-transactor-template.properties and run bin/datomic ensure-transactor I get the error:
bin/datomic ensure-transactor config/samples/sql-transactor-template.properties config/samples/sql-transactor-template.properties
java.lang.IllegalArgumentException: No method in multimethod 'ensure-transactor*' for dispatch value: :sql
#2021-11-0401:46favilaNo, sql varies a lot. See the sample ddl #2021-11-0401:47favilaThere are examples for Postgres and MySQL#2021-11-0408:44Tobias SjögrenWhat are entity id ranges for the db, tx and user partitions?#2021-11-0411:57favilaUse the d/entid-at function in the peer api to construct some and find out. I’ve been told cloud entity ids may have a different structure#2021-11-0423:03kennyShould trailing _s in where clauses always be elided? e.g., [:find ?e :where [?e :a _]] is always preferred over [:find ?e :where [?e :a]].#2021-11-0507:13Lennart BuitI personally like the underscore. To me it shows that the binding was not forgotten, but it was explicitly ignored.#2021-11-0602:49Drew VerleeNot really. I would use it when you have to e.g [_ ...]
Keep in mind, there are 5 elements in that tuple, you are typically ignoring 2.#2021-11-0509:58ivanaHello! In my current project I have an .edn file, which holds datomic transactor-level functions, uploaded as :db/fn. There I can use clojure.core functions, clojure.string/... etc. But I also have my own .clj file with some custom util functions. Can I load it somehow and use it inside datomic transactions?#2021-11-0510:30Joe LaneHi @U0A6H3MFT, check out https://docs.datomic.com/on-prem/reference/database-functions.html#using-transaction-functions!#2021-11-0602:46Drew VerleeWould this be how you achieve atomic commits?#2021-11-0620:45ivana@U0DJ4T5U1 I'm not totally sure, but it looks like yes, and this is one of the goals of the whole idea#2021-11-0617:44BenjaminIs there a way to have optional arguments in queries? Like something that is either wildcard or some value#2021-11-0617:46thumbnailDepending on your needs you may be able to add or remove clauses conditionally. It's data after all :)#2021-11-0617:48Benjaminyea true#2021-11-0617:55vlaaadhttps://gitlab.com/arbetsformedlingen/taxonomy-dev/backend/jobtech-taxonomy-api/-/blob/develop/src/clj/jobtech_taxonomy_api/db/graphql/concept.clj#L35
#2021-11-0617:56vlaaadHere is how we do it, basically using map queries since those are easy to modify programmatically#2021-11-0619:12potetm@U02CV2P4J6S Like this? https://docs.datomic.com/cloud/query/query-data-reference.html#collection-binding#2021-11-0619:13potetmIt depends on what you’re trying to do, but that’s one way to pass a collection of variable length and get an ord result.#2021-11-0703:34Dustin Getzhttps://www.reddit.com/r/Clojure/comments/qmgo23/comment/hjaoh5b/#2021-11-0714:34potetmYou could just put the whole query behind a conditional.#2021-11-0714:34potetmFor that one.#2021-11-0714:35potetmIf the :find or :where clauses are more involved, you could use the map query syntax and share elements for each query.#2021-11-0714:41potetmlol that’s what the OP was asking to avoid#2021-11-0714:41potetm:derp:#2021-11-0714:55potetmOkay, my response to that 😄: https://www.reddit.com/r/Clojure/comments/qmgo23/datalogdatomic_is_there_a_better_way_to_write/hjogogu/#2021-11-0914:56Gavin DaviesHello!
I have a poorly performing Clojure+Datomic application I have been tasked with tuning. Unfortunately, I am not a Clojure developer and have no Datomic experience at all! I am trying to configure Cloudwatch, but I cannot see anything appearing in my CloudWatch metrics at all.
1. Datomic 0.9.6045 in Docker clojure:openjdk-8-tools-deps-alpine on ECS using OpenJDK 1.8.0_212
2. Following this guide https://docs.datomic.com/on-prem/overview/aws.html#other-storages
3. Running bin/datomic ensure-transactor config/transactor.properties config/transactor.properties gives me java.lang.IllegalArgumentException: No method in multimethod 'ensure-transactor*' for dispatch value: :sql
4. The only search result I can find for this is here https://clojurians-log.clojureverse.org/datomic/2018-03-20 but I don’t see an explanation or a resolution
5. I tried pushing a new image and it does start with my config file, I see no errors in the logs, so I assume I am running ensure-transactor incorrectly or I am missing something?
My config/transactor.properties is:
protocol=sql
host=<REDACTED>
port=<REDACTED>
metrics-callback=<REDACTED>.datomic-logging.core/report-metrics
license-key=<REDACTED>
alt-host=0.0.0.0
sql-url=jdbc:postgresql://<REDACTED>
sql-user=<REDACTED>
sql-password=<REDACTED>
sql-driver-class=org.postgresql.Driver
memory-index-threshold=32m
memory-index-max=512m
object-cache-max=1g
ping-host=0.0.0.0
ping-port=9999
ping-concurrency=6
aws-cloudwatch-dimension-value=datomic
aws-cloudwatch-region=us-west-2
I would appreciate any advice you can offer! Thanks.#2021-11-0914:59Gavin DaviesI found this question a few above my own https://clojurians.slack.com/archives/C03RZMDSH/p1635985753218600 - does this mean I cannot use ensure-transactor for SQL?#2021-11-0915:14Joe LaneHey @U02LH3SBBEJ, the information you're looking for is on the https://docs.datomic.com/on-prem/overview/storage.html page. For your storage you're going to want to follow https://docs.datomic.com/on-prem/overview/storage.html#sql-database. bin/ensure-transactor currently supports DynamoDB.#2021-11-0915:14Gavin Daviesthank you 🙂 Will give that a go!#2021-11-0915:16Joe LaneStart from the top and once you finish the SQL section don't skip the other non-storage sections, you will need them.#2021-11-0915:17Gavin Daviesso, we already have our database, the connection all works and what I want to do is turn on Cloudwatch metrics, I’m not clear on how specifically to turn on Cloudwatch metrics?
(this is a pre-existing app, been running about 3 years, it just needs Cloudwatch 🙂 )
Please excuse my ignorance, this is day 0 for me of trying to work with Datomic 🙂#2021-11-0915:18Gavin DaviesI guess that I don’t need ensure-transactor because I created the AWS role myself?#2021-11-0915:19Joe Laneensure-transactor is a convenience script, but since you're already running in prod, you probably don't need it 🙂#2021-11-0915:21Joe LaneBefore we get into the Cloudwatch metrics effort, can you describe how you know the "Clojure+Datomic" application is performing poorly? What have you observed to lead you to that conclusion?#2021-11-0915:21Gavin Daviesahh, right
Should I expect the metrics to be appearing? Like, I assume there’s a heartbeat metric or something?
OR does my app have to explicitly say “send this metric to Cloudwatch”?#2021-11-0915:23Gavin Davies> can you describe how you know the “Clojure+Datomic” application is performing poorly? What have you observed to lead you to that conclusion?
Hoo boy, that’s a long story 😄 I’ll condense!
Customers reported degrading performance over time, as the data set has grown, we get operations like “get 1000 items into a <select> list” taking either 5ms or 10s under load.
So basically, we measure the page with a stopwatch 😄 it’s not very sophisticated, we need to get observability in place.
So I’m guessing some kind of CPU or memory contention, so the first step is to get some instrumentation in there and measure what is going on.
The peer has some instrumentation via statsd so I can see that in Cloudwatch. However, the transactor is not currently outputting anything to Cloudwatch, so today I am trying to add that.#2021-11-0915:25Gavin Daviesdo I need to have the S3 log copying bucket as an intermediary for Cloudwatch logs? i only added the aws-cloudwatch* lines, not an aws-s3* line… I don’t know if I need the S3 or if I can go straight to cloudwatch?#2021-11-0915:26Joe LaneAre any of the issues related to transactions? If not, then your problem squarely sits inside the peers (not trying to dissuade you from adding metrics to the transactor).#2021-11-0915:26Joe LaneIf you want transactor logs to go into a cloudwatch logstream you will need to install and manage the cloudwatch agent yourself.#2021-11-0915:27Joe LaneThe built-in logging facilities solely upload to S3 (I think it's daily by default)#2021-11-0915:27Gavin DaviesI have no idea what the issues relate to yet, I’m sorry to say.
I don’t really want the logs at this stage, all I want is things like heap usage etc to be pumped to Cloudwatch#2021-11-0915:28Gavin Daviesthe problem may well be the peers, unfortunately we have no visibility at all yet. I was hoping I could just enter a few lines of config and get the JVM/datomic metrics pushed to Cloudwatch?#2021-11-0915:29Joe Lanehttps://docs.datomic.com/on-prem/overview/aws.html#other-storages
^^ This should be all you need.#2021-11-0915:32Gavin DaviesOK, thanks. I have been through that and unfortunately I have failed to see any metrics in Cloudwatch.
My background is in AWS rather than in Clojure/Datomic. I believe I have set the IAM role correctly and I have redeployed my transactor image with the correct config to send metrics to Cloudwatch.
I have not set up anything for S3.
Does the Cloudwatch bit rely on the S3 bit in some way? I’d have assumed not but maybe there’s some dependency?
I see no logs even acknowledging Cloudwatch - when the transactor starts, I don’t see any mention of Cloudwatch, would I expect to?#2021-11-0916:02jaretHey @U02LH3SBBEJ I noticed that you have a metrics callback configured are your metric lines reporting there?
If the s3 dimension is not set up it will create a bucket for log copying. Did an s3 bucket get made on your system? In non-DDB systems customers typically set up an s3 bucket and point #aws-s3-log-bucket-id= to it. I'd recommend setting that up with an existing bucket just to see if you get logs copied to the bucket.
The piece that always trips up non DDB users is the permissions on the transactor role. https://docs.datomic.com/on-prem/overview/aws.html#other-storages. Can you confirm you have those perms set?#2021-11-0916:05Joe LaneAlso @U02LH3SBBEJ, do you see metrics if you comment out the metrics-callback=<REDACTED>.datomic-logging.core/report-metrics?#2021-11-0916:05jaretI know you walked up to this system, but do you know if this worked previously? Does this work for another system? Where did this system live previously and are you copying it to test performance or tweaking it?#2021-11-1010:00Gavin DaviesYESSSSSS! IT WORKS! Thank you all! ❤️
Removing the metrics-callback means I am now seeing datomic stats in Cloudwatch. What a relief!
Thank you all so much
#2021-11-1010:05Gavin Davies> jaret [4:05 PM]
> I know you walked up to this system, but do you know if this worked previously? Does this work for another system? Where did this system live previously and are you copying it to test performance or tweaking it?
Great questions!
I have been at $ORG for a year, this app is about 4 years old.
It runs fine for other clients with smaller data sizes, which are in separate AWS accounts which are identically configured via Gruntworks. Our largest client recently onboarded a bunch of new customers and that seems to have degraded performance for “poor” to “intermittently unacceptable”.
I am not moving the system, it remains hosted in ECS, I’ve just been parachuted in like “hey Gav you’ve got (expired!) AWS certification can you make this perform more gooder and stuff until we can EOL this application in 6 months”
So I suspect I may have more questions, but now I can see what the transactor is up to, I feel like I’ve got a fighting chance! (the peer was already emitting metrics to Cloudwatch)
#2021-11-1023:07jaret@U02LH3SBBEJ When you have questions feel free to reach out to https://www.datomic.com/support.html. Cases can be created by e-mailing <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> or <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>. We'd be happy to help with any performance issue or tuning and perhaps we can have a call to give you some basic tips for reviewing/understanding what your bottlenecks are likely to be.
#2021-11-1112:56Ivar RefsdalDo you have a REPL into prod or a connection to the prod database?
I've used https://github.com/ptaoussanis/tufte/ for profiling Clojure code with decent success.
My guess would be that your peer query is slow, but that is just a guess of course.
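A hedged sketch of profiling a suspect peer query with tufte, following its README; the query and :item/name attribute are hypothetical:

```clojure
(require '[taoensso.tufte :as tufte]
         '[datomic.api :as d])

;; Print profiling results to stdout when a profile run completes.
(tufte/add-basic-println-handler! {})

;; Wrap the suspect call sites in tufte/p; the enclosing profile
;; reports per-site timing stats.
(defn timed-select [db]
  (tufte/profile {}
    (tufte/p :select-items
      (d/q '[:find ?name :where [?e :item/name ?name]] db))))
```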
#2021-11-1114:51Gavin Daviesthanks, we will reach out to Datomic support and our Clojure guy is looking into Tufte :-)#2021-11-0916:52michele mendeldatomic.client.api vs datomic.api
Is datomic.api only for on-prem, since its documentation is only found there?
I tried it on my cloud setup, but it didn’t work (maybe I need something in my deps).#2021-11-0917:03favila> Is datomic.api only for on-prem#2021-11-0917:03favilayes#2021-11-0917:18michele mendelThanks#2021-11-1006:29michele mendelBtw, why is this api not part of the cloud api? It looks like it has some convenient functions.#2021-11-1012:13favilaThe client api is designed for low bandwidth (ie possible out-of-process implementation) few client dependencies, and little client state. Many portions of the peer api simply can’t be implemented in a way that meets those goals#2021-11-1013:06michele mendelI see. Thanks again#2021-11-0920:46mynomotoAre eids resolved inside a tuple when using d/pull? Example:
[{:db/ident :example/id
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :example/other
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :other/id
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :example/id+other
:db/valueType :db.type/tuple
:db/tupleAttrs [:example/id :example/other]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}]
(d/pull db [:*]
[:example/id+other [1522 [:other/id 520832909]]])#2021-11-0920:47favilatuple values are not inspected for entity refs; this won’t work#2021-11-0920:48mynomotoOk, so I need to query to be able to resolve this right?#2021-11-0920:48favilaYou can use [:example/id+other [1522 (d/entid db [:other/id 520832909])]]#2021-11-0920:49favilad/entid resolves any entity reference to an entity id if it can#2021-11-0920:49favilanil if it can’t#2021-11-0920:49mynomotoOh, this is better, it should suffice. Thanks!#2021-11-0920:49favilaon-prem only. I don’t know how the cloud folks manage this without a query#2021-11-0920:50mynomotoWell, this is a query right? It will be done before calling d/pull, or am I misunderstanding?#2021-11-0920:50favilait’s not a d/q query, i.e. it doesn’t invoke a datalog query engine#2021-11-0920:51favilabut it will do IO if it needs to, if that’s what you mean by query#2021-11-0920:52mynomotoYeah, I see what you mean. Thanks, this was really helpful. I could not find that information on docs/searching the web.#2021-11-0920:47mynomotoIs there something like this that works? I'm using on-prem if this is relevant.#2021-11-1008:52Kris CHello, is it possible to have (define) a composite tuple if one of the keys (tuple attributes) has a :db.cardinality/many ?#2021-11-1013:12Benjaminwhere can I read on the difference between onprem and cloud?#2021-11-1015:04kennyhttps://docs.datomic.com/on-prem/cloud/moving-to-cloud.html
#2021-11-1015:57kennyIs it expected that you cannot pull from the same eid in two locations in the :find of a query?
(d/q '[:find (pull ?e [:db/doc]) (pull ?e [:db/ident])
:where
[?e :db/ident :db/cardinality]]
(d/db conn))
Execution error (ArrayIndexOutOfBoundsException) at datomic.core.datalog/fn$project (datalog.clj:560).
Index 1 out of bounds for length 1#2021-11-1015:58kenny#2021-11-1100:22kennyCreated an ask on this since it seems like a bug: https://ask.datomic.com/index.php/682/pull-identical-var-in-two-find-positions#2021-11-1107:45Lennart BuitI’ve been bitten by this before too#2021-11-1107:47Lennart BuitYou can also not do {:find [?e (pull ?e [*])] …}#2021-11-1015:59kennyCreating a new var name lets you do it:
(d/q '[:find (pull ?e [:db/doc]) (pull ?e2 [:db/ident])
:where
[?e :db/ident :db/cardinality]
[(identity ?e) ?e2]]
(d/db conn))
=>
[[#:db{:doc "Property of an attribute. Two possible values: :db.cardinality/one for single-valued attributes, and :db.cardinality/many for many-valued attributes. Defaults to :db.cardinality/one."}
#:db{:ident :db/cardinality}]]#2021-11-1016:46BenjaminI'm gathering test statistics in my database, e.g. what errors were logged during a test run (so we can have stats about our tests). Would you create idents for each error type and then have an "error log" entity that has a ref attribute. Or would you make the error type something like a string?#2021-11-1016:51favila(s/string/keyword/)#2021-11-1016:51Benjamins = symbol?#2021-11-1016:52favilasorry that’s sed. I’m saying why consider a string when you could use an actual keyword type#2021-11-1016:52Benjaminah lol#2021-11-1016:52Benjaminyea makes way more sense#2021-11-1016:53favilaAdvantage refs/enums: you can rename them easily, you can attach metadata to their entities, they can be semi-closed, self-describing (in db) enumerated sets (i.e. open, but ceremony and thought required to make new ones)
#2021-11-1016:55favilaDisadvantage: d/pull api returns them as entities not keywords, which can be annoying if you want to represent them as keyword values in code. You shouldn’t have a very large number (tens of thousands) of idents for performance reasons. You need to change schema to add them, which isn’t good if you don’t control the set#2021-11-1016:55favilausing keywords instead is basically the opposite tradeoffs vs the above#2021-11-1016:56Benjaminthanks. I get a bit the feeling for starters keywords is fine#2021-11-1016:56favilaIf you use a keyword, you can still use an attribute predicate to enforce the set if you need (e.g. guard against typos), but now enlarging the set requires a deploy, not just a schema change
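The attribute-predicate guard favila mentions might look like this (a sketch; the ident, predicate namespace, and allowed set are all hypothetical):

```clojure
;; Predicate deployed on the transacting process's classpath:
(ns my.app.preds)

(defn known-error-type?
  "Returns true only for the enumerated error keywords."
  [k]
  (contains? #{:error/timeout :error/assertion :error/io} k))

;; Schema attaching the predicate to a keyword-typed attribute;
;; transactions asserting an unlisted keyword will fail.
;; [{:db/ident       :error/type
;;   :db/valueType   :db.type/keyword
;;   :db/cardinality :db.cardinality/one
;;   :db.attr/preds  'my.app.preds/known-error-type?}]
```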
#2021-11-1021:42bhurlowAny Datomic users have strategies for generating good data sets for testing? Anyone using generative testing against the Datomic schema?#2021-11-1109:25octahedrionI've used clojure.test.check.generators with Datomic to do property based testing of databases over time (a single database changing over time) and "space" (a space of databases with different variations) with dev-local and d/with and it works well
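A hedged sketch of the property-based approach octahedrion describes: generate entity maps with test.check and transact them speculatively with d/with, so each trial runs against a throwaway db value. The :user/name attribute is hypothetical:

```clojure
(require '[clojure.test.check.generators :as gen]
         '[datomic.api :as d])

;; Generator for entity maps matching a (hypothetical) schema attribute.
(def user-gen
  (gen/fmap (fn [s] {:user/name s}) gen/string-alphanumeric))

;; Speculatively apply n generated users; the underlying db is untouched,
;; so each property trial starts from the same base value.
(defn db-with-users [db n]
  (:db-after (d/with db (gen/sample user-gen n))))
```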
#2021-11-1112:53Ivar RefsdalDatomic on prem peer question:
Is there any way to access the tomcat JDBC datasource connection pool that Datomic creates?
I want to do a select 1 for a connection as part of a health check endpoint.
Thanks#2021-11-1113:47jaretHi Ivar, there is no api for manipulating storage directly. What problem are you trying to solve? We do have transactor health checks available: https://docs.datomic.com/on-prem/overview/transactor.html#health-check-endpoint#2021-11-2215:59Ivar RefsdalI'm trying to add a /health endpoint for my peer on-prem application that checks if the database connection is OK#2021-11-1117:07kkuehneHi there, I'm getting errors when trying to fetch the Datomic dev-local dependency, with the following message
error in process sentinel: Could not start nREPL server: Could not find artifact com.datomic:dev-local:jar:1.0.238 in central ()
Could not find artifact com.datomic:dev-local:jar:1.0.238 in clojars ()
We're using this for our CI and dev environments and Datomic-Cloud for production. Everything is set up locally and it worked before. Does anybody have similar problems currently?#2021-11-1117:13Benjaminmaybe fixed when you install dev-tools again? there is a script install in the zip archive#2021-11-1117:19Alex Miller (Clojure team)That artifact is not in a public repo so you either need to install it (locally as suggested) or use your authenticated repo access#2021-11-1117:56kkuehneThanks, I'll try that.#2021-11-1117:29BenjaminI can't do :find [?e ...] with the client api, is there an alternative that returns stuff in a collection instead a collection of collections?#2021-11-1117:34Joe Lane(into #{} cat (do-the-q db)){:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2021-11-1123:28jdkealyIs there some setting in postgres or dynamo transactor settings that allows you to store your data in an unencrypted format... i.e. human-readable storage ?#2021-11-1123:57favilaIt’s not encrypted, it’s gzipped fressian#2021-11-1213:13donavanHas anyone seen this error before when creating an Ions client:
{
"Type": "java.lang.AssertionError",
"Message": "Assert failed: cfg",
"At": [
"datomic.cloud.client.local$create_client",
"invokeStatic",
"local.clj",
171
]
}
d/client is being passed what I think is the correct options, something like:
{
"ServerType": "Ion",
"Region": "region",
"System": "system-name",
"Endpoint": ""
}
☝️ is from CloudWatch logs… the real value is edn#2021-11-1213:29jaretHey @U0VP19K6K does your endpoint really have the proxy port? If you're in cloud your ion won't need the proxy port. What version of Datomic Cloud are you using?#2021-11-1213:29jaretI would audit all your ion calls to client and make sure they are all valid. And confirm you don't have dev-local or on-prem api's deployed with your code.#2021-11-1213:31donavanWe’re a bit old 616-8879 (upgrading is another task)#2021-11-1213:32donavanI’m busy refactoring so I may well have mangled the config.#2021-11-1213:32donavanSorry, TBC I get the above error when creating the client#2021-11-1213:42donavan@U1QJACBUM does datomic.cloud.client.local$create_client seem to imply that dev-local dep is ending up in the classpath (that final .local)?#2021-11-1215:52donavanThe Cloudformation EndpointAddress output matches what I posted above, i.e. it includes the port#2021-11-1216:30donavanWhen you say “don’t have dev-local or on-prem api’s deployed with your code” you mean they’re not included in our deps.edn and are thus not in the classpath on the ion instance then I think I can confirm that they are not. I found a classpath printout in the “IonHttpDirectStarted” event and they do not appear there (though I couldn’t find any client api dep at all though)#2021-11-1515:28jaret@U0VP19K6K sorry! The endpoint should be correct for your version. Can you share your deps.edn?#2021-11-1515:30jaretAlso does this error happen when you deploy and get a client, or when you try to get a client after deployed? Can you use the same map to connect locally? Do you get the map from env vars?#2021-11-1515:30jaretI suspect that get-param might not be returning what you expect.#2021-11-1909:49donavanThanks Jaret.
If anyone comes across this, it’s addressed here https://docs.datomic.com/cloud/troubleshooting.html#assert-failed#2021-11-1216:27BenjaminJo what is a good way to get data from a python program into datomic cloud? The rest api is legacy, right?#2021-11-1216:39Benjaminnvm I'm going back to the drawing board concerning the python part 😛#2021-11-1216:48ghadiyou can always deploy a datomic Ion that transacts information, then invoke it from python
#2021-11-1216:48ghadias in over http, lambda, or hooking up the ion to a sqs queue - the world is your oyster
#2021-11-1216:49ghadibut you can't / shouldn't want to connect to datomic cloud directly from python#2021-11-1216:50ghadibtw the rest API was an on-prem (not cloud) facility#2021-11-1216:50Benjaminah#2021-11-1419:00Drew VerleeIt's somewhat confusing that datomic cli client has an option to pass an AWS profile, but that seems to be lower in priority than the set ENV
ok, now it is taking the profile value over the ENV. ok moving on.#2021-11-1515:07Tatiana KondratevichHey! I'm using integrant in my app on ion. At the moment, as in ion-event-example, I call d/connect only in transactions. My integrant init-key returns (partial get-connection config). Based on this, do we open the connection only when called? Maybe someone made a similar integrant component. What is the best way to do this?
Here's my code
(def get-client
(memoize
(fn [config]
(d/client
(cond
(:dev? config) {:server-type :dev-local
:storage-dir :mem
:system "dev"}
:default {:server-type :ion
:region (utils/get-param "region")
:system (utils/get-param "system-name")
:endpoint (utils/get-param "endpoint")})))))
(defn get-connection
"Get shared connection."
[config]
(utils/with-retry #(d/connect (get-client config) {:db-name (utils/get-param "db-name")})))#2021-11-1704:11hdenNot sure if it’s the best solution, but I’ve made a duct / integrant module for this.
https://github.com/hden/duct.module.datomic#2021-11-1520:09kennyWould it be a fair expectation that a single value returning query aggregate would be more performant than pulling the same data & aggregating client side?#2021-11-1520:23favilaClient api: almost certainly; peer api or ion: maybe.#2021-11-1520:24kenny(using client api)#2021-11-1520:24kenny& yeah, that's what I thought. Here's some stats for a test of 50 runs with each approach:
• aggregate: 2772ms avg, SD 718, min 2150, max 6415
• pull: 2317ms avg, SD 782, min 1807, max 5682
#2021-11-1520:25kennyThis is over a fairly large dataset. For some reason, pulling & sending thousands of maps over the wire is quicker than aggregating on the db.#2021-11-1520:26favilaThe query engine itself doesn’t have any aggregation smarts: it realizes a whole result set before it aggregates, and sometimes memory pressure can cause that to be slow. This is why index-pull or d/datoms with an incremental aggregator can sometimes be faster. On a peer, it can very often be much faster.#2021-11-1520:26favilaBut on client-api especially, the smaller the pipe between the client and peer, the more you want to push the aggregation into the peer.#2021-11-1520:28kennyWouldn't you realize the whole result set regardless of pull or aggregate?#2021-11-1520:28kennye.g., (pull ?e my-pattern) vs ?by (sum ?x)#2021-11-1520:29favilausing q or qseq?#2021-11-1520:29kennyq#2021-11-1520:29favilaoh, yes, that makes no sense
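favila’s suggestion of an incremental aggregator can be sketched with d/qseq, which streams result chunks to the client so the aggregate is folded as rows arrive rather than after the whole set is realized. This is a sketch only; the attributes :my/service and :my/value are hypothetical stand-ins:

```clojure
(require '[datomic.client.api :as d])

;; Incremental client-side aggregation over a streamed result.
;; Only the running totals are held in memory, not the full result set.
(defn sum-by-service [db]
  (reduce (fn [acc [service v]]
            (update acc service (fnil + 0.0) v))
          {}
          (d/qseq {:query '[:find ?service ?v
                            :where
                            [?e :my/service ?service]
                            [?e :my/value ?v]]
                   :args [db]})))
```

Note this sketch has no :with clause, so duplicate rows collapse as in any find-spec; whether that matches the intended sum semantics depends on the data.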
#2021-11-1520:30favilamaybe it’s the grouping around ?by?#2021-11-1520:31kennyWhat do you mean?#2021-11-1520:31kennyCaveat: I am attempting to aggregate 13 optional values. Maybe it doesn't like that many aggregates?#2021-11-1520:32kennyQuery looks like this, where the find var & where clause is repeated 12 more times for each value I aggregate
{:find [?services
(sum ?v)],
:where [[(get-else $ ?e my-attr 0.0) ?v]],,
:with [?r]}#2021-11-1520:33favilawhat’s the pull version look like?#2021-11-1520:35kennyBetter comparison (first option has 12 additional clauses in where & 12 more find vars)
{:find [?services (sum ?v)],
:where [[?r :x ?e]
[(get-else $ ?e :my-attr 0.0) ?v]],,
:with [?r]}
{:find [(pull ?r [:services
{:x [*]}])]
:where [[?r :x ?e]]}#2021-11-1520:36kennye.g.,
{:find [?services (sum ?v) (sum ?2) ... (sum ?vN)],
:where [[?r :x ?e]
[(get-else $ ?e :my-attr 0.0) ?v]
[(get-else $ ?e :my-attr2 0.0) ?v2]
...
[(get-else $ ?e :my-attrN 0.0) ?vN]
]
:with [?r]}#2021-11-1520:39favilawhere is services from?#2021-11-1520:39favilaI would say the result set after :where is going to be much bigger for the aggregation one#2021-11-1520:40favilawhat happens if you sum only one ?v at a time? (i.e. 12 queries)#2021-11-1520:44kennyOh, sorry - omitted that in creating the example. Services is from ?r. #2021-11-1520:45kennyWhy do you think the aggregation one is larger?#2021-11-1520:46kennyI'll try that in 5m. #2021-11-1520:46favilait has more columns#2021-11-1520:47kennyHow?#2021-11-1520:47favila?e ?r ?services ?vX… vs ?r ?e#2021-11-1520:48kennyHmm. Why'd you add r and e at the end?#2021-11-1520:48favilathe pull version only has ?r and ?e#2021-11-1520:48favilaI’m comparing the two#2021-11-1520:49kennyRight. Why is there dupes at the end of https://clojurians.slack.com/archives/C03RZMDSH/p1637009275327000?thread_ts=1637006953.319600&channel=C03RZMDSH&message_ts=1637009275.327000#2021-11-1520:49favilavs = versus#2021-11-1520:50kennyOh haha. Oops #2021-11-1520:51kennyI see what you mean. I was trying to express: for a ?services, sum up all these values. #2021-11-1520:55favilais :services one or many?#2021-11-1520:58kennyone#2021-11-1520:59kennySo, if I understand you correctly, I should compare the result set size of:
{:find [?service ?v ?v2 ... ?vN],
:where [[?r :service ?service]
[?r :x ?e]
[(get-else $ ?e :my-attr 0.0) ?v]
[(get-else $ ?e :my-attr2 0.0) ?v2]
...
[(get-else $ ?e :my-attrN 0.0) ?vN]]
:with [?r]}
to
{:find [(pull ?r [:service
{:x [*]}])]
:where [[?r :service ?service]]}#2021-11-1521:01favilaput it that way, you don’t even need ?service, it may be dropping it#2021-11-1521:01favilaso :where is just ?r#2021-11-1521:01favilarows is number of unique ?r, cols is 1#2021-11-1521:03favilaare your benchmark numbers including whatever aggregation code you rolled yourself for the pull?#2021-11-1521:09favilaand the thing I was curious about is how this performs comparatively:
(->> attrs
(mapv #(d/q '{:find [?service ?attr (sum ?v)]
:with [?r]
:in [$ ?attr]
:where [[?r :service ?service]
[?r :x ?e]
[(get-else $ ?e ?attr 0.0) ?v]]
:with [?r]}
db %))
(reduce (fn [aggs [service attr s]]
(assoc-in aggs [service attr] s))
{}))#2021-11-1521:15kenny> you don’t even need ?service, it may be dropping it
What do you mean? I need the sums by service#2021-11-1521:16kennyThat is so much better. Will try now.#2021-11-1521:19kennyWowza. Your query version returns avg 531ms.#2021-11-1521:20favilawith all 12 attrs?#2021-11-1521:20kennyYes#2021-11-1521:21favilathat’s even issuing them serially#2021-11-1521:21kennyAs opposed to 12 concurrent for each attr?#2021-11-1521:21favilayep#2021-11-1521:22favilaso, it’s something about large result sets, donno what exactly#2021-11-1521:22favilabut in general datomic’s datalog is really bad at aggregating#2021-11-1521:22kennyOk so stepping back. Do we know why the other was so slow? It seems like it's an equivalent query.#2021-11-1521:22favilaso keep it simple#2021-11-1521:23kennyOh yeah. Your query is so much better. Thank you for that.#2021-11-1521:23favila“the other” meaning summing all 12 at once?#2021-11-1521:23kennyYes#2021-11-1521:23favilanot the pull#2021-11-1521:23kennyRight#2021-11-1521:25favilaI suspect it’s very wide, high-cardinality intermediate result-sets from the where#2021-11-1521:26kennyIs there a way to know?#2021-11-1521:26favilasure, source code#2021-11-1521:26favilaor maybe a profiler#2021-11-1521:26kennyDatomic's?#2021-11-1521:26favilayes
#2021-11-1521:27kennyWell, this is extremely exciting. 2-3s -> 0.5s is a huge win for this query. Thank you for taking the time to help.#2021-11-1521:30favilacan you indulge me once more?#2021-11-1521:31kennySure#2021-11-1521:31favilaHow does this compare:#2021-11-1521:31favila(->> attrs
(d/q '{:find [?service ?attr (sum ?v)]
:with [?r]
:in [$ [?attr ...]]
:where [[?r :service ?service]
[?r :x ?e]
[(get-else $ ?e ?attr 0.0) ?v]]
:with [?r]}
db )
(reduce (fn [aggs [service attr s]]
(assoc-in aggs [service attr] s))
{}))#2021-11-1521:31favilawhere attrs is a vector of your attrs#2021-11-1521:31favilanotice we added columns rows (x12) but not rows columns#2021-11-1521:32favilaalso I’m wondering why there’s a :with ?r that is thrown away#2021-11-1521:33kennyHow did we add columns?#2021-11-1521:33favilasorry, I meant rows not columns
#2021-11-1521:34favilaalso the ?r is obvious now; although I think my reduction may be wrong#2021-11-1521:34favilaif your intention is sums per service#2021-11-1521:34favilaservice+attr#2021-11-1521:35kennyThe output needs to be by ?service, but the sum must include all ?r#2021-11-1521:37kennyBtw, I added 1 more clause to pull out the db ident#2021-11-1521:40kennyWe're back to 3s with the input version#2021-11-1521:41favilaweird#2021-11-1521:41favilamaybe it is just memory pressure#2021-11-1521:41favilado you know if GC is happening during this query?#2021-11-1521:41kennyThe output is different btw. The input version returns one map for all ?attr passed in. The non-input returns one map for all ?attr the db is actually using#2021-11-1521:42favilaI’m not sure what you mean. Could you just show the query code?#2021-11-1521:43kennyNon-input:
{:query '{:find [?services ?attr-k (sum ?value)],
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service
vb-k
sum],
:with [?r]},
:args (list db),
:timeout 60000}
Input
{:find [?services ?attr-k (sum ?value)],
:in [$ [?attr ...]]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[(get-else $ ?orig-value-breakdown ?attr 0.0) ?value]
[?attr :db/ident ?attr-k]]
:keys [service
vb-k
sum],
:with [?r]}#2021-11-1521:43kenny(count (dbu/q non-input))
=> 15
(count (dbu/q input))
=> 39
#2021-11-1521:44kennyThe the input version, there are many maps that have a sum of 0.0#2021-11-1521:45favilathere is no get-else to turn those missing items into 0, they will just be missing from the output#2021-11-1521:45kennyRight#2021-11-1521:45kenny(and that's totally ok)#2021-11-1521:45favilaalso this part forces EAVT [?orig-value-breakdown ?attr ?value]#2021-11-1521:45favilais that OK? Is every ?attr in there trustworthy?#2021-11-1521:46kennyGood point. It happens to be right now, but that seems like a very sketchy assumption.#2021-11-1521:47favilayou can also drop get-else if it doesn’t matter#2021-11-1521:48kennyDropping get-else avgs around 2.1s & has the same 15 results.#2021-11-1521:49kennyGeez, yeah I don't think I can use [?orig-value-breakdown ?attr ?value] . Nothing guarantees a non-numeric key won't get added to ?orig-value-breakdown#2021-11-1521:50kennySo now we're
;; non-input
{:find [?services ?attr-k (sum ?value)],
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service
vb-k
sum],
:with [?r]}
;; input
{:find [?services ?attr-k (sum ?value)],
:in [$ [?attr ...]]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service
vb-k
sum],
:with [?r]}#2021-11-1521:51favilaThe part I don’t get is why this
(mapv #(d/q {:find [?services ?attr-k (sum ?value)],
:in [$ ?attr]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service
vb-k
sum],
:with [?r]} db %) attr-vec)
would be 0.5 seconds total, but this:
(d/q {:find [?services ?attr-k (sum ?value)],
:in [$ [?attr ...]]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service
vb-k
sum],
:with [?r]} db attr-vec)
would be over 2 seconds#2021-11-1521:52favilaAll I can think is memory pressure somewhere in the system#2021-11-1521:52kennyIt seems happy#2021-11-1521:54kennyInput: 2112ms avg
Non-input: 561ms avg#2021-11-1521:56kennyLol
{:find [?services ?attr-k (sum ?value)],
:in [$ ?my-attr-set]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]
[(contains? ?my-attr-set ?attr-k)]]
:keys [service vb-k sum],
:with [?r]}
#2021-11-1521:57kenny^ 628ms avg#2021-11-1521:59favilaFor whatever reason, EAVT on ?orig-value-breakdown is cheaper than 12 AEVT on ?attr ?orig-value-breakdown (assuming these queries are representative)#2021-11-1521:59favilaThat’s counter to my expectations#2021-11-1522:00kenny> assuming these queries are representative
Could you clarify this?#2021-11-1522:01favilaI’m just getting confused by the edits and revisions, pseudo and non-pseudo queries, not sure which ones correspond to which timing anymore#2021-11-1522:03kennyUnderstandable. You should see the size of the comment block I've got now 😅 Here's a summary:
Query 1
• 561ms avg
• Fastest option
• Downside is ?value is not guaranteed to be numeric.
{:find [?services ?attr-k (sum ?value)],
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service vb-k sum],
:with [?r]}
Query 2
• 2112ms avg
• Idiomatic alternative to 1 with severe perf impact.
{:find [?services ?attr-k (sum ?value)],
:in [$ [?attr ...]]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]]
:keys [service vb-k sum],
:with [?r]}
Query 3
• 628ms avg
• Hack around 2 to ensure ?value is sumable.
{:find [?services ?attr-k (sum ?value)],
:in [$ ?my-attr-set]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]
[?attr :db/ident ?attr-k]
[(contains? ?my-attr-set ?attr-k)]]
:keys [service vb-k sum],
:with [?r]}#2021-11-1613:28favilaThis makes it look like my original supposition was false: intermediate result set size wasn’t the problem. It really looks like index choice is what matters.#2021-11-1613:29favilaWhat is the total time if you run Query 1 12 times with ?attr as an input parameter (a different attr each run)#2021-11-1615:28kennyI think your original supposition was right -- I didn't include the original query because the above are so much cleaner and I forgot about it 🙂 . In these cases, I think you're right again on index choice. I'll try that out.#2021-11-1615:39kennyDo you think it matters whether I use a literal for ?attr or a query input?#2021-11-1615:44kennyTried both literal and input.
Input
• 473ms avg
{:find [?services (sum ?value)],
:in [$ ?attr]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]]
:keys [service sum],
:with [?r]}
Literal
• 458ms avg
{:find [?services (sum ?value)],
:in [$]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown :cs.model.value-breakdown/provider-cost ?value]]
:keys [service sum],
:with [?r]}#2021-11-1615:45kennyI can try all 12 concurrently too. Curious what the impact on the DB CPU would be.#2021-11-1615:53kennyConcurrent
• Code looks similar to the below, just ran 50 tests & took avg.
• 2573ms avg
(def futs-f
(fn []
(mapv
(fn [vb-k]
(let [qmap {:query '{:find [?services (sum ?value)],
:in [$ ?attr]
:where [[?r :cs.model.monitored-resource/service ?services]
[?r :cs.model.monitored.cloud-acct/cloud-account ?cloud-acct]
[?cloud-acct :cs.model.monitored.cloud-acct/mode :cs.model.monitored.cloud-acct/mode-fetch-all]
[?r :cs.model.value-breakdown/orig-value-breakdown ?orig-value-breakdown]
[?orig-value-breakdown ?attr ?value]]
:keys [service sum],
:with [?r]},
:args [db vb-k],
:timeout 60000}
results (manifold.deferred/future (dbu/q qmap))]
results))
cs.model.value-breakdown/value-breakdown-ks)))
(time (mapv deref (futs-f)))#2021-11-1615:54kennySpikes CPU#2021-11-1615:56favilaThis strongly suggests that the object cache is simply not large enough to hold the AEVT indexes involved, and it has to swap them in and out as it runs#2021-11-1615:57favilaUsing EAVT to pull those related stats is the critical difference here#2021-11-1616:00kennyHmm. To be sure I'm following, we're talking about the internal Datomic call for the AEVT index component {:components [?attr ?orig-value-breakdown]} ?#2021-11-1616:02kennyFull view of the spike during the parallel queries.#2021-11-1616:05favila[?orig-value-breakdown ?attr ?value] This one. If ?attr is bound, query prefers AEVT; ?orig-value-breakdown is bound but ?attr is not, then EAVT. That’s really the only difference between the fast and slow approaches#2021-11-1616:09kennyWait, let me back up. The query for 1 attr https://clojurians.slack.com/archives/C03RZMDSH/p1637077448348900?thread_ts=1637006953.319600&cid=C03RZMDSH took under 500ms. Running all 12 concurrently took 2573ms on avg. So you're saying that the concurrent one took longer b/c "the object cache is simply not large enough to hold the AEVT indexes involved, and it has to swap them in and out as it runs" ?#2021-11-1616:10favilathat’s how it seems#2021-11-1616:10favilarunning 12 concurrently is no different than running 1 query with 12 attrs as input#2021-11-1616:10favilaor, I was trying to see if that would be so#2021-11-1616:11kennyAnd that is true, given our results, right?#2021-11-1616:11favilayes#2021-11-1616:12favilaIO is death#2021-11-1616:13kennyGot it. Very insightful. Our concurrent one is slightly worse than letting Datomic handle that.
Ok so the theory on the perf difference is that the query that uses the AEVT index is much slower due to a too small object cache.#2021-11-1616:14favilayeah, to verify this you should observe the object cache hit rate directly, or storage gets, or even just aggregate network#2021-11-1616:15favilaif this is right, then the EAVT query will have a higher hit rate, lower storage get, and lower network activity as it runs#2021-11-1616:15favilaand the AEVT version the opposite#2021-11-1616:16kennyI see. Ok. So you're supposing that the cache is large enough & warm enough to hold EAVT index but not the AEVT?#2021-11-1616:16favilayes#2021-11-1616:16favilaAre there any other query loads on this instance? it could be they are using EAVT of these indexes already (e.g. via pull).#2021-11-1616:17favilaso maybe they are already loaded, and when evicted to make more space they are quickly re-loaded#2021-11-1616:18kennyNo, just me testing these scenarios. Certainly one of the most common queries executed in this group would be a pull.#2021-11-1616:18favilaI mean direct d/pull; I’m fairly sure pull in a query uses AEVT also#2021-11-1616:19kennyAh. Then there's a lot of both d/pull and d/q + pull#2021-11-1616:20kennyIf the object cache was full, would we expect very little available memory?#2021-11-1616:21favilaI’m not sure how it works on cloud. On peer, by default half the heap space is reserved for object cache, and this is tuneable#2021-11-1616:22favilait doesn’t just consume heap opportunistically#2021-11-1616:22favilaso you may in fact have plenty of heap free, but still have a full and churning OC#2021-11-1616:23kenny> if this is right, then the EAVT query will have a higher hit rate, lower storage get, and lower network activity as it runs
This instance is running with ssd valcache so perhaps no network changes?#2021-11-1616:23favilaah, yeah then it would be disk#2021-11-1616:23favilathat makes the difference even more alarming#2021-11-1616:25kennyBecause you'd expect valcache to make up the difference with object cache not holding the aevt index?#2021-11-1616:27favilaI just wouldn’t expect it to be 4x slower#2021-11-1616:27favilavalcache is pretty fast#2021-11-1616:28favilait definitely wouldn’t make up the difference completely. A heap pointer is always going to be much much faster than IO+decompression+deserialization
#2021-11-1616:28kennyI thought so 🙂 & I think it should have no problem loading whatever it needs given the ssd is 0.5tb. Total system is a little over 900m datoms, but this db we've been testing on is some fraction of that.#2021-11-1616:30kennyI'll run the tests again & try to align the disk usage metric with the test.#2021-11-1623:06kennyGot ambushed by a flurry of meetings 😢 Now to run the tests...#2021-11-1623:11kennyBooted up a brand new instance to make sure we've got a good starting point. Calling d/connect on the db we've been working with results in 14.6m bytes written.#2021-11-1623:18kennyI started with Query 3, the most likely candidate I will use later. The first run of the query took 3583ms. The second run took 1445ms. We now have a second spike in our graph that happened when I ran query 3 for the first time. It wrote 21.6m bytes.#2021-11-1623:24kennyRunning query 3 50 times resulting in an avg time of 666ms and no disk bytes read.#2021-11-1623:31kennyNow I ran the Concurrent query. The first run took 6074ms. The second run took 4343ms. And our 3rd disk write appears. It wrote 20.9mb.#2021-11-1623:37kennyRunning Query 4 50 times resulting in an avg time of 4067ms and no disk bytes read.#2021-11-1623:42favilaAre you sure this is the valcache drive? None of these seem to have reads. Do you not have access direct oc, valcache, and storage get/hit rate metrics?#2021-11-1623:43kennyGood point. I am not 100% certain - will double check aws docs. 
Perhaps it's simply not using valcache for some reason.#2021-11-1623:44kennyPerhaps because result is available in object cache?#2021-11-1623:44favilaIf that’s true the mystery only deepens#2021-11-1623:44favilaBut these metrics should be published directly, we shouldn’t have to infer from disk activity#2021-11-1623:46kennyThis is the full dashboard Datomic provides for a query group (plus my manual disk chart)#2021-11-1623:46kennyI added the vertical lines for each event.#2021-11-1623:47kennyActually, I think it's highly likely that disk chart is the valcache drive since we are seeing disk writes that directly align with when the queries happen.#2021-11-1623:47favilaIt claims hit rates of 100%. So my theory is not correct and I am out of ideas#2021-11-1623:48kennyI think that "Cache Hit Ratios" chart is only has lines for EFS, which is different.#2021-11-1623:49kennyFrom https://docs.datomic.com/cloud/whatis/architecture.html#caching, it seems Cloud's cache order is: 1) object cache 2) valcache 3) efs 4) s3 fallback#2021-11-1623:50kennySo it's skipping valcache for some reason or the chart's legend is misleading.#2021-11-1623:51kennyErm, ddb should be in that list somewhere, I'd think.#2021-11-1623:59kennyI sshed into the node to make sure we aren't going crazy. From df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.5G 0 7.5G 0% /dev
tmpfs 7.5G 0 7.5G 0% /dev/shm
tmpfs 7.5G 496K 7.5G 1% /run
tmpfs 7.5G 0 7.5G 0% /sys/fs/cgroup
/dev/xvda1 8.0G 2.5G 5.6G 31% /
8.0E 173G 8.0E 1% /opt/datomic/efs-mount
/dev/nvme0n1 436G 102M 414G 1% /opt/ssd1
tmpfs 1.5G 0 1.5G 0% /run/user/4242
tmpfs 1.5G 0 1.5G 0% /run/user/1000
#2021-11-1700:00kennyVery interesting results I might say. Datomic's CF template attaches an EBS gp2 (non-ssd) drive to the nodes automatically. It surprises me that the drive has 2.5gb used. The nvme drive does have 102m written though.#2021-11-1700:05kennyOk, it's definitely written to the ssd drive.
tree -da /opt/ssd1/
/opt/ssd1/
├── datomic
│ └── valcache
│ ├── 000
│ ├── 001
│ ├── 002
#2021-11-1700:13kennyJust need info on what the Cache Hit Ratios chart mean. There are 2 paths:
1. Cache Hit Ratios comprises any Datomic cache type. So those points we've seen could mean that Datomic is reaching for valcache in our test queries.
2. Cache Hit Ratios really only includes data for the EFS cache. If so, why is it reaching for EFS when it likely has the data available in valcache?
Both of these would lead to more questions and still leave us with a mystery of why one query is slower than the other.#2021-11-1700:20kennyThis has been a very enlightening discussion. I believe we are well into the area of opening a support ticket 🙂#2021-11-1700:28kennyOne observation from our previous discussion. We had previously agreed on this:
> Ok so the theory on the perf difference is that the query that uses the AEVT index is much slower due to a too small object cache.
However, that doesn't seem to hold. I would think that once you load the AEVT index into the object cache subsequent queries using that index would be quick.#2021-11-1700:34favilaThat’s right. So either these metrics are deceiving or not revealing oc churn, or the theory is incorrect. I am pretty confident that the different indexes for that clause is all that is different, but cannot explain the performance difference #2021-11-1700:37kennyGot it. I've opened a support ticket with everything we learned here. Thank you so much for your help. Happy to update you on our findings if you're interested.#2021-11-1700:09kennyDatomic Cloud monitoring question: Does the "Cache Hit Ratios" for a query group only include cache hits for EFS or does it also include other cache types (object, valcache)?#2021-11-1714:00Yuriy ZaytsevIs there a REST API for datomic cloud? I’m trying to find a way to give some data for R&D team and they live in a python world only#2021-11-1716:04kennyNo. I think someone asked this recently and @ ghadi had the excellent idea of using Ions + API Gateway to expose one.#2021-11-1716:05Yuriy ZaytsevThanks for reply. I’ll try to search chat history for it#2021-11-1716:48respatializedIs the Analytics API not sufficient for this use case?
https://docs.datomic.com/cloud/analytics/analytics-jupyter.html#2021-11-1717:10Yuriy ZaytsevNo, unfortunately analytics can’t be used here. We have it and are using it for other things.#2021-11-1913:06BenjaminIs there an idiomatic way of generating tempids? gensym or randomUUID come to mind - whatever, I realized for my use case I already have a unique string at hand#2021-11-1914:28thumbnailFor us it depends on the use case. Generally the tempids are crafted by hand, and we use lookup refs for collections (i.e. dynamic stuff). Gensym otherwise.
UUIDs can be a good tool too, especially if you have a unique-identity field for them either way
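Since a tempid in tx data is just an arbitrary string resolved to an entity id at transaction time, the unique domain string already at hand can serve directly. A hedged sketch; the :order/* and :line/* attributes are made up, and :line/order is assumed to be a ref-typed attribute:

```clojure
;; The domain's own unique string doubles as the tempid, tying the
;; new line item to the new order within the same transaction.
(def order {:order/number "ord-42" :order/total 99.0})

(d/transact conn
  {:tx-data [{:db/id        (:order/number order)   ; string tempid
              :order/number (:order/number order)}
             {:line/order (:order/number order)     ; resolves to the same new entity
              :line/sku   "sku-1"}]})
```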
#2021-11-1915:14Benjamin[?order :shop/line-items ??]
;; How do I get the order back when I have a line item?
;; I want to do what it says here
;;
;; like this but I have cardinality/many
[?release :release/artists ?artist]
#2021-11-1915:15Benjamin#2021-11-1915:18Joe Lane@benjamin.schwerdtner
(def order-from-line-item
'[:find ?order
:in $ ?line-item
:where
[?order :shop/line-items ?line-item]])
(def the-db (d/db the-conn))
(def line-item-id "abc123")
(d/q {:query order-from-line-item :args [the-db line-item-id]})
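A possible alternative to the query above: pull can walk the ref backwards with reverse-attribute syntax (underscore-prefixed attribute name). A sketch, assuming line-item-id resolves to the line-item entity:

```clojure
;; :shop/_line-items navigates from the line item back to the
;; order(s) that reference it via :shop/line-items.
(d/pull the-db '[:db/id {:shop/_line-items [:db/id]}] line-item-id)
```

Because :shop/line-items is presumably not a component attribute, the reverse ref comes back as a vector of referencing entities.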
#2021-11-1917:05BenjaminWhat is the most succinct way to assert a unique entity? I thought I can do something like
(d/transact
conn
{:tx-data
[{:db/id [:unique/id "bestid5"]}]})
the effect of the above is I think not what I want. When I pull
(d/pull
(d/db conn)
'[*]
[:unique/id "bestid5"])
=> #:db{:id nil}#2021-11-1917:09Joe LaneYou're close, (d/transact conn {:tx-data [{:unique/id "bestid5"}]})
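For context, the one-liner above only behaves as an idempotent assert/upsert because :unique/id is declared :db.unique/identity in the schema. A sketch of the schema that behavior assumes:

```clojure
;; Unique-identity makes re-asserting {:unique/id "bestid5"} upsert
;; onto the existing entity instead of failing or creating a duplicate.
(d/transact conn
  {:tx-data [{:db/ident       :unique/id
              :db/valueType   :db.type/string
              :db/cardinality :db.cardinality/one
              :db/unique      :db.unique/identity}]})
```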
#2021-11-1917:20BenjaminNow I'm trying this to assert and reference the unique in the same tx
(d/transact
conn
{:tx-data
[{:airstats.pipeline/id "bestid4"} ;a unique
{:airstats.job/pipeline ;ref to unique
[:airstats.pipeline/id "bestid4"]}
{:airstats.job/pipeline ;I'll have multiple of those
[:airstats.pipeline/id "bestid4"]}]})
=>
"Unable to resolve entity: [:airstats.pipeline/id \"bestid4\"] in datom [-9223301668109598094 :airstats.job/pipeline [:airstats.pipeline/id \"bestid4\"]]",
Do I need to put a tempid?#2021-11-1917:27Joe LaneYep, you need a tempid
(d/transact
conn
{:tx-data
[{:airstats.pipeline/id "bestid4"
:db/id "tempid-for-bestid4"} ;a unique
{:airstats.job/pipeline ;ref to unique
"tempid-for-bestid4"}
{:airstats.pipeline/id "bestid5"
:db/id "tempid-for-bestid5"}
{:airstats.job/pipeline ;I'll have multiple of those
"tempid-for-bestid5"}]})
I'm not sure how your :airstats.job/pipeline relates to :airstats.pipeline/id but the above will create two different "airstats.job"s, each job references one "airstats.pipeline"#2021-11-1917:29Benjamin:airstats.job/pipeline is supposed to be a ref to an entity with :airstats.pipeline/id and multiple jobs can ref 1 pipeline. Thanks will check. Maybe I should rethink the schema and have jobs as a reference component on the pipeline entity#2021-11-1917:31Joe LaneMy only generic advice for you is to try and avoid thinking in rectangles if you can. Datomic doesn't operate in structs of entities. Instead, like Clojure, it operates as sets of attributes.
#2021-11-1917:31Joe Lane(TBC, I'm not saying what you were asking about was necessarily bad or particularly rectangular)#2021-11-1917:32Benjaminhaha I'll meditate on that!
#2021-11-1918:40kennyCould duplicate :where clauses impact query performance in any way?#2021-11-2017:03BenjaminIs it possible to assert and add entities to a lookup ref like this?
{:tx-data
[[:db/add
[:airstats.pipeline/id "bestid4"]
:airstats.pipeline/jobs
;; assert new entities here
[{:airstats.error/type "fo"}]
]]}
well something like this, I mean I want to add new "jobs" entities.
This code throws
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"Invalid list form: [#:airstats.error{:type \"fo\"}]",
:db/error :db.error/invalid-lookup-ref}#2021-11-2017:31favilaThe value slot of db/add or retract cannot be a map#2021-11-2017:32favilaEmit multiple ops or use the map form
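favila's two options can be sketched with the attribute names from the thread — a sketch, assuming `conn` and the `:airstats.*` schema from earlier messages:

```clojure
;; Option 1: the map form — a nested map expands into a new entity
(d/transact conn
  {:tx-data [{:airstats.pipeline/id   "bestid4"
              :airstats.pipeline/jobs [{:airstats.error/type "fo"}]}]})

;; Option 2: multiple ops — create the job under a tempid, then ref it
(d/transact conn
  {:tx-data [[:db/add "new-job" :airstats.error/type "fo"]
             [:db/add [:airstats.pipeline/id "bestid4"]
              :airstats.pipeline/jobs "new-job"]]})
```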
#2021-11-2113:26Benjaminmultiple ops = multiple transactions right?#2021-11-2113:27favilaNo#2021-11-2113:28Benjaminah you meant multiple [:db/add ..] ?#2021-11-2113:28favilaYes
#2021-11-2113:26Benjamin(let [already? (into
#{}
cat
(d/qseq
'[:find ?key
:where
[_ :airstats.jira.ticket/key ?key]]
(d/db @conn)))])
(defn already?
[db key]
(d/pull
db
'[:db/id]
[:airstats.jira.ticket/key key]))
which is better?
the use case is to check if some ticket key is already in the db. So I don't assert it twice#2021-11-2115:47thumbnailIf you make the ticket key unique, asserting twice will not work (or will work as an upsert). The advantage of that is that it's atomic :)#2021-11-2118:14Benjaminhttps://docs.datomic.com/cloud/best.html#set-txinstant-on-imports what does it mean for txInstant to be newer than the transactor's clock time? The second sentence seems to imply I shouldn't choose dates from the future or else it needs to catch up?#2021-11-2120:41thumbnailThat's right, but it is possible to set a txInstant in the past (as long as there are no newer datoms present).
As an example, we used it for an initial ETL job instead of a created-at attribute
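thumbnail's back-dating trick looks roughly like this in the client API — a sketch; `conn` and the ticket attribute are assumptions from the thread:

```clojure
;; Sketch: back-date a transaction during an import by asserting
;; :db/txInstant on the reified transaction entity ("datomic.tx").
;; The back-dated instant must still be newer than the latest
;; txInstant already in the db.
(d/transact conn
  {:tx-data [{:db/id        "datomic.tx"
              :db/txInstant #inst "2020-01-01T00:00:00.000-00:00"}
             {:airstats.jira.ticket/key "ABC-1"}]}) ; hypothetical attr
```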
#2021-11-2211:48Benjaminlooking for my cloud system endpoint, when I do aws cloudformation describe-stacks --stack-name <name> the only output that sounds like endpoint is this:
{
"OutputKey": "EndpointAddress",
"OutputValue": "",
"Description": "Stable entry address"
},
#2021-11-2212:42jaretThat OutputValue is your endpoint. You can also see your system endpoint https://docs.datomic.com/cloud/operation/howto.html#template-outputs under the outputs tab. Specifically under the ClientApiGatewayEndpoint key. As a side curiosity, what version of Datomic Cloud are you using?#2021-11-2212:46jareton older versions of Datomic cloud the endpoint takes the format of:
#2021-11-2213:31Benjaminshould be the newest I selected from the top of the list in the marketplace configure#2021-11-2214:43Benjaminah I guess I should have used "production" instead of "solo"#2021-11-2214:57jaret@U02CV2P4J6S If you are looking to upgrade, might I suggest that you use our split stack instructions and move to separate production templates on the latest. https://docs.datomic.com/cloud/operation/split-stacks.html.#2021-11-2214:59jaretThe https://docs.datomic.com/cloud/changes.html#936-9118 allows you to choose smaller instances to get the lower price of the "solo" template when we had the bifurcation. Marketplace does not allow us to remove the option for any users who were previously subscribed to select the solo topology even though it is now moot.#2021-11-2216:11BenjaminI see. Now I already launched a production one next to the solo one. Guess I'll check the split thing#2021-11-2216:12jaretYeah I think you should upgrade to the production template and follow our docs, reading https://docs.datomic.com/cloud/changes.html + the split stack page. I'd be happy to jump on a call to discuss.#2021-11-2216:12jaretIf you don't care about anything you made with the solo, I can also walk you through completely deleting.#2021-11-2217:07Benjaminfixed btw - awesome#2021-11-2219:16Daniel JompheDear Cognitect Datomic team, here's a renewed plea for a production-grade Datomic Cloud Backup & Restore solution.
Here we relied on the nice efforts by Tony Kay with some success, but we now see our fate is truly and only in your hands.#2021-11-2219:37vlaaadbtw we use this for datomic cloud backups https://github.com/lambdaforge/wanderung#2021-11-2219:39vlaaadwe actually wrap it in another, our ops-specific tool that does some other niceties like opening datomic-cloud proxies from dev machines: https://gitlab.com/arbetsformedlingen/taxonomy-dev/backend/jobtech-taxonomy-api-gitops/-/blob/master/src/jobtech_taxonomy_api_gitops/database.clj#L239#2021-11-2219:51Daniel JompheThanks vlaaad. Wanderung is intriguing, and it's nice to have your gitops util to look at.#2021-11-2219:52Daniel JompheDo you feel like it satisfies all normal requirements? E.g.
• truly coherent remapping of IDs (obviously) in the new DB
• capability to clean up PII when restoring to a non-prod DB#2021-11-2221:29vlaaadI think @UB95JRKM3 might provide more insight into this. What do you mean by coherent remapping?#2021-11-2221:31vlaaaddb ids are different between backups/restores, but they represent the same data graphs#2021-11-2221:31vlaaadthere are sanity checks for this stuff there#2021-11-2221:32vlaaadour datomic usage is a bit weird, since we use it as a "git" of sorts with history being a part of publicly exposed API (although wrapped so customer don't know about db ids and txs)#2021-11-2221:34vlaaadbecause of that, we have separate databases for edits and what is available to users in prod, and we copy and restore them between internal edits and prod read-only dbs, so we actively use the copying/restoring feature, and so far it seems to work fine#2021-11-2221:36vlaaadSame repo has tools for rewriting history as well
https://gitlab.com/arbetsformedlingen/taxonomy-dev/backend/jobtech-taxonomy-api-gitops/-/blob/master/src/jobtech_taxonomy_api_gitops/database.clj#L254#2021-11-2221:37vlaaadone thing to be aware of is that we can afford to hold all datoms in memory, so it might not be optimized for backups of big databases...#2021-11-2221:45Daniel JompheOk, you answered all the questions I had (including DB id coherency).
Thanks again!#2021-11-2312:39stuarthalloway@U0514DPR7 plea heard. We are working on it.
#2021-11-2315:30Daniel JompheThanks for letting us know, Stu! 🎉#2021-12-0219:14tony.kay@U072WS7PE what would be really nice is some kind of time estimate. Some of us have regulators to satisfy, and “it will happen sometime in the future” is not a comfortable position to be in.#2021-11-2311:40Benjaminwhat is the difference between calling d/entid and passing the attr as keyword?
(d/qseq
'[:find ?val
:in $ ?attr
:where [_ ?attr ?val]]
db
(d/entid attr))
(d/qseq
'[:find ?val
:in $ ?attr
:where [_ ?attr ?val]]
db
attr)
also entid is a peer only thing, right? So is passing the keyword the way to go on the client?
#2021-11-2312:45stuarthallowayHi @U02CV2P4J6S! If you are finding all values for an attribute you do not need query and might be better served with https://docs.datomic.com/cloud/query/raw-index-access.html#datoms.
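Stu's suggestion in code form — a sketch, assuming `db` and the ticket attribute discussed earlier, with `d` being the client API namespace:

```clojure
;; Sketch: all values of one attribute via raw index access instead of query.
;; :aevt groups datoms by attribute; :components narrows to that attribute.
(map :v
     (d/datoms db {:index      :aevt
                   :components [:airstats.jira.ticket/key]}))
```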
#2021-11-2312:47stuarthallowayBut to answer your specific question: While entid is a peer API, entity ids and keywords are both fundamental to Datomic's data model and present in all flavors of Datomic.#2021-11-2313:49Benjaminah sounds like datoms is what I need then thanks#2021-11-2315:07Ivan FedorovWas there any article on Datomic schema migrations?
Also, is it common to use ragtime?#2021-11-2318:14thumbnailWe use conformity, but I think ragtime is a fair bet nowadays#2021-11-2318:48Daniel JompheI asked the same question last summer and had this answer, which we used successfully.
https://clojurians.slack.com/archives/C03RZMDSH/p1627657775006800#2021-11-2318:49Daniel JompheYou can follow the thread's discussion by clicking on the link...#2021-11-2409:50Ivan FedorovThanks @UHJH8MG6S !
Thanks @U0514DPR7 !
Yeah I think that idempotent-tx by @U0P7ZBZCK is awesome!
#2021-11-2410:48Ivan Fedorovfound this https://github.com/magnetcoop/stork
_
ouch! tied to on-prem version#2021-11-2317:09Benjaminmything/name mything/my-name what is better? I thought 1 is bad because it hides the core function when I destructure with :keys. Does it matter?#2021-11-2317:12potetm{n :mything/name}#2021-11-2317:12potetmdestructure like that^ and you can use the kw you prefer w/o fear of shadowing
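potetm's rename-style destructuring in a self-contained sketch (the map literal is made up for illustration):

```clojure
;; Sketch: renaming during destructuring sidesteps shadowing clojure.core/name
(let [{n :mything/name} {:mything/name "pipeline-1"}]
  ;; n is bound to the value; clojure.core/name is still usable
  [n (name :mything/name)])
;; => ["pipeline-1" "name"]
```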
#2021-11-2322:19bmaddyDoes anyone else see weird netty error messages when running Datomic in a docker container? I see the following when connecting (peer, dev protocol). Everything still seems to work just fine though so it's not an actual problem. I'd just be happy to make a little change if someone knows a quick fix.
Nov 23, 2021 3:13:08 PM org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector createConnection
ERROR: AMQ214016: Failed to create netty connection
java.net.UnknownHostException: datomic
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
#2021-11-2411:16tvaughanWe've been running datomic on-prem (transactor, peer, and console) in containers for about two years and haven't seen this one before
#2021-11-2417:25bmaddyThanks for mentioning that--it's nice to have it confirmed that it's on my end.
#2021-11-2411:20BenjaminHi, I'm trying to connect to my presto server with metabase.
The sql cli seems to work fine. When I try to add the database in metabase the server logs this warning:
2021-11-24T12:15:24.922+0100 WARN http-worker-131 org.eclipse.jetty.util.SharedBlockingCallback Blocker not complete
#2021-11-2411:24Benjaminthis is what I fill into the fields. And these are the options that work for the sql-cli
./presto --server localhost:8989 \
--catalog airstats \
--schema airstats
#2021-11-2414:05BenjaminThere was no issue when I unticked ssl.
I think the issue might be that I run metabase with java17 and presto server with 11#2021-11-2510:30Ivan FedorovAny new developments regarding GDPR approaches since https://vvvvalvalval.github.io/posts/2018-05-01-making-a-datomic-system-gdpr-compliant.html? UPD: Talking about Cloud#2021-11-2512:03stuarthallowayexcision performance is much better now#2021-11-2512:05Ivan Fedorov@U072WS7PE thanks!
Any thoughts on if it’s better to excise data from cloud or store private fields in a separate KV storage?
__
UPD my bad – my question was supposed to be Cloud specific#2021-11-2520:14steveb8n+1. Cloud excision would make a complex solution much simpler for me too. Can we dare to hope?#2021-11-2612:40jaretI want to clarify since the question was modified to being Cloud specific. Excision is not available in Cloud. We are well aware of the desire for a feature in this space and are continuing to evaluate options. As of now the GDPR advice is the same. Specifically, an approach that I see customers using is to use "encryption + throw away the key", storing personal data outside of Datomic in another store. Referencing the specific user information as needed and throwing away the key when asked to comply with GDPR.#2021-11-2722:26steveb8n@U1QJACBUM thanks for the update. Using encryption creates all kinds of complexity e.g. where to store the key, query matching and sorting, etc. Is there a blog post or some guide on how to address these issues? As far as I’m aware, the simplest solution is to move GDPR data out of Datomic and into xtdb or similar. Running 2 dbs adds complexity as well so hoping to avoid that but I know that decision is coming for me. It’s stating the obvious but I figure it’s worth reiterating that this can force your customers away from Datomic.#2021-12-0819:24Ivar RefsdalI wrote a library to have finer control over excision:
https://github.com/ivarref/rewriting-history
This is for on prem only though.#2021-11-2607:50BenjaminIs there some cache to clear with the client api to clear aws creds? Restarting the program is a way but yea#2021-11-2703:42Drew VerleeThe documentation of the Custom transaction functions lists:
• atomic transformation functions in transactions
• integrity checks and constraints
• spec-based validators
• and much more - your imagination is the limit!
Am I correct in understanding that only the first item listed "atomic transformation functions in transactions" is actually unique to transactions?
That is, you could do spec based validations on data before it's transacted, but with a transaction function you can do it during the transaction.#2021-11-2704:08emccueI think the second one is unique too#2021-11-2715:32Drew VerleeI couldn't find anything in the docs about it. I assume they just add a spec validation in the middle of the transaction function. The same way you could add any valid clojure code.#2021-11-2716:42BenjaminWhat is the minimal iam role for connecting and reading datomic? Is there such a role by default?
Ah now I see there is an admin and a readonly policy#2021-11-2719:11emccue@drewverlee As an example - you can verify a cardinality 1 relationship with a schema, but i don’t think you can validate a foreign key relationship without transaction functions#2021-11-2800:45Drew VerleeThat makes sense.#2021-11-2812:57tony.kayYou can't reliably validate anything that has a multi-datom dependency without atomicity. If you're validating it while something else is modifying it then your final transaction will result in potentially invalid data. You can use CAS to mitigate that as long as you CAS on the stuff you read as well as the stuff you write. I have a system where there is a central owner entity for each account. I find transactor functions hurt throughput too much when they contain a lot of validation, so I instead do an optimistic concurrency control where I CAS on a counter (no history) on that entity to increase it by one from what I read before starting validation. I'm essentially treating that counter like an account-level db lock. That ensures only one tx can run per account at a time even though most of the work is done outside the tx.#2021-11-2818:57Drew VerleeVery interesting, i'll have to give that some thought.#2021-11-2908:57BenjaminWhat is the correct way to schedule a lambda ion say every 30min? I tried to click it in "EventBridge" but it errs:
datomic.ion.lambda.handler.exceptions.Incorrect: datomic.ion.lambda.handler.exceptions.Incorrect
---
Is there a command to remove ions? ✅
---
where can I read on logging from inside an ion?#2021-11-2914:01Benjamindatomic.ion.lambda.handler.exceptions.Incorrect: datomic.ion.lambda.handler.exceptions.Incorrect
clojure.lang.ExceptionInfo: null {:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message nil}
at datomic.ion.lambda.handler$throw_anomaly.invokeStatic(handler.clj:25)
at datomic.ion.lambda.handler$throw_anomaly.invoke(handler.clj:21)
at datomic.ion.lambda.handler.Handler.on_anomaly(handler.clj:175)
at datomic.ion.lambda.handler.Handler.handle_request(handler.clj:205)
at datomic.ion.lambda.handler$fn__3856$G__3781__3861.invoke(handler.clj:68)
at datomic.ion.lambda.handler$fn__3856$G__3780__3867.invoke(handler.clj:68)
at clojure.lang.Var.invoke(Var.java:399)
at datomic.ion.lambda.handler.Thunk.handleRequest(Thunk.java:35)#2021-11-2914:50Daniel JompheIf you remove references to an ion from ion-config.edn, the next deployment will delete the orphaned lambda(s). As for your original question, I let other people answer it.#2021-11-2914:54BenjaminI see thanks#2021-11-2916:03Joe LaneYou can invoke a Lambda however you'd like. Event-Bridge (formerly CloudWatch Events) is a fine option for a scheduled "cron" task.#2021-11-2916:07Joe Lane@U02CV2P4J6S If you're still having trouble with this, you can follow the https://github.com/Datomic/ion-starter project, specifically, the section on https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter/lambdas.clj for inspiration.#2021-11-2916:13Benjamin@U0CJ19XAM hm. Do you know if the event bridge has the correct permissions and such by default? I guess I'll try making a simple one work#2021-11-2916:14Joe LaneDatomic doesn't give any AWS services permissions to invoke Lambda Ions by default.#2021-11-2916:17BenjaminAh alright. Then I know I have to grant the permission/role somehow#2021-11-2916:22Joe LaneI'd follow https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-run-lambda-schedule.html but use the Lambda Ion instead of the javascript one they show you how to make. Specifically, you need to do the part (that I can't link to facepalm ) that starts with the phrase To grant the EventBridge service principal....
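The "grant the EventBridge service principal" step can also be done from the AWS CLI — a sketch in which the function name and rule ARN are placeholders, not values from this system:

```shell
# Sketch: allow an EventBridge rule to invoke a Lambda Ion.
# --function-name and --source-arn below are hypothetical.
aws lambda add-permission \
  --function-name my-system-my-scheduled-ion \
  --statement-id eventbridge-cron-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/every-30-min
```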
#2021-11-2916:31Benjaminthanks a lot that was the thing
#2021-11-3002:43pinealanIf a datomic schema only has :db.cardinality/one attributes, is it fair to say it's essentially a 6nf schema?#2021-11-3011:57BenjaminWould you use websockets / "real time applications" with datomic? Let's say I follow this https://medium.com/free-code-camp/real-time-applications-using-websockets-with-aws-api-gateway-and-lambda-a5bb493e9452 , make a http entry point ion and put connections data into datomic. What would you say to that? Example use case is a discord bot#2021-11-3013:47jaretWe had a customer create a detailed post on using websockets: https://forum.datomic.com/t/websocket-guide-wip/1916 that might be of some interest to you.
#2021-11-3016:23Drew VerleeI wrote that as part of my ongoing toy project. Most of the work is in understanding AWS websocket setups. That is, datomic cloud doesn't really add any overhead to the process.
I also haven't thought through storing the websocket sessions. Likely that should be done in a separate dynamo db table where it's easier to remove/expire them.#2021-11-3017:57Daniel JompheWhen we oriented ourselves towards WS, we had our dynamodb table for storing websocket sessions. We didn't see value in keeping them in our Datomic system of transactions.
#2021-11-3018:52Drew Verlee@U0514DPR7 Did you by any chance do a write up on that?
I'm mostly curious about an overview of the session management in a couple typical cases. I imagine the overview is just...
1. client -> server : if create -> store in dynamo. If delete -> delete from dynamo. If message -> whatever
2. server -> client: if new data you want to send to potential clients then look up affected users, then lookup session keys in dynamo and then send message. #2021-11-3018:54Drew VerleeI want to keep a rules engine on the server that has a cache of the data any open session might care about. i'm curious if anyone has gone down that road.#2021-11-3018:55Daniel JompheHi Drew, I was not the one implementing it.
I helped making a POC through AWS API Gateway and made a way in our code to talk to DDB, but the actual session management was implemented by another developer.
You've outlined it quite well, though.#2021-12-0115:45Benjamin@U0DJ4T5U1 would you store such cache in dynamo, then?#2021-12-0117:27Drew Verlee@U02CV2P4J6S I'll revise my earlier statement. I would cache things as needed, the easiest (from a development perspective) option is to never cache. So re-creating the rete-network data on request each time is fine for my application currently. It has no users 🙂 .
Assuming perf did start to matter, and i did need to keep the rules network in memory (a cache of the db). Then It would be on the server likely just in a clojure atom.
From there it gets more complicated and it would be useful to know which access patterns specifically were slow or a bottleneck.#2021-12-0117:30Benjamininteresting. What is the server? Is that a http ion? Ah I searched the forum and suppose 1 way is to start some process when the namespace of the ion loads#2021-12-0121:54Daniel JompheOn the other hand you need to keep the connection-id to be able to send your response back to the ws client.#2021-12-0208:48BenjaminCan I do anything inside a transaction? Like making a web request to slack or something?#2021-12-0208:51Lennart BuitYou probably don’t want to; transacting is an intentionally serial process. So any significant work you do inside the transactor will block other transactions from going through.#2021-12-0209:00BenjaminI see. So maybe my code would do something like transact and afterwards trigger a lambda that makes the slack message?#2021-12-0209:19Lennart Buitmy aws-fu isn’t great, but that sounds reasonable
#2021-12-0214:56ghadihttps://docs.datomic.com/cloud/transactions/transaction-functions.html#creating
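The pattern behind that link can be sketched as follows — a hypothetical transaction function that only derives tx-data from the db value, leaving the Slack call to run after `d/transact` returns (function and attribute names are made up; `d` is the client API):

```clojure
;; Sketch: transaction functions run serially in the transactor, so they
;; should only compute tx-data — no web requests inside.
(defn assert-new-ticket
  "Returns tx-data asserting ticket-key, or throws if it already exists."
  [db ticket-key]
  (if (seq (d/q '[:find ?e :in $ ?k
                  :where [?e :airstats.jira.ticket/key ?k]]
                db ticket-key))
    (throw (ex-info "ticket already exists" {:key ticket-key}))
    [{:airstats.jira.ticket/key ticket-key}]))
```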
#2021-12-0210:23Benjaminis there a correct way to ssh tunnel into a compute group? https://docs.datomic.com/cloud/whatis/architecture.html#api-gateway here it says have an api gateway for client access. The goal would be to connect to a running nrepl process#2021-12-0213:05Daniel Jomphehttps://github.com/markbastian/replionhas been a great guide for this, although the procedure is much simpler since this summer's new Datomic Cloud.#2021-12-0213:07Daniel JompheThe new topology makes it easier since there no more is any bastion to go through, and the Datomic instances are publicly exposed to the Internet.#2021-12-0213:10Daniel JompheOpening the tunnel is as easy as:
ssh -i/<path-to-key>.pem -L 7000:
where the X vals are your compute instance's public DNS.
#2021-12-0213:10Daniel Jomphe7000 assumes that's the port you serve nRepl on.#2021-12-0213:10Benjaminoh nice thanks a lot
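Spelled out in full, the tunnel command looks roughly like this — key path, user, and host below are placeholders, not values from this system:

```shell
# Sketch: forward local port 7000 to an nREPL server on the compute node.
# Adjust the key path, user, and public DNS for your own system.
ssh -i ~/keys/my-datomic.pem \
    -L 7000:localhost:7000 \
    ec2-user@ec2-X-X-X-X.compute-1.amazonaws.com
# then point your editor's nREPL client at localhost:7000
```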
#2021-12-0213:11Daniel JompheYou only need to adjust the security group on the compute instance to let the traffic pass, as shown in Replion's guide (the guide shows you how to do it on the Bastion instance and the compute instance IIRW, but now there is no bastion, only the compute instance to open up).#2021-12-0213:12Benjaminit's like an "inbound rule" on the security group right?#2021-12-0213:12Daniel JompheYes, let me confirm and screenshot.#2021-12-0213:14Benjaminwhich is the security group that counts for a query group stack? There is lambda and loadbalancer group#2021-12-0213:15Daniel Jomphe#2021-12-0213:16Daniel JompheThe one with -NodeSecurityGroup- in the name.
#2021-12-0213:16Benjaminwill try making it work 😄
#2021-12-0213:17Daniel Jomphegood luck! Things are eerily silent when we miss a detail, but otherwise it's quite easy.#2021-12-0213:18Benjaminwhat is missing a detail? Ah you mean if something isn't working there are no warnings and such when trying this#2021-12-0213:21Daniel JompheApproximately in this order:
1. Network SSH p22: If your addresses are wrong, some error; if your path to the pem key is wrong, quick auth error
2. Network TCP p7000: If you open up the ports on the wrong SG or source CIDR, etc., IIRW connection will fail quickly with some error, or silently after a long timeout
3. nRepl service: If you forget to start your nRepl server on the appropriate port, long timeout IIRW
I could be well off in my predictions. I never can remember which response is caused by which detail missed.
#2021-12-0217:39Benjaminhaha it works bananadance so sick. Can even use cider debug instrumentation on lambdas :pogg:
#2021-12-0220:39Daniel JompheNice, Benjamin! 🙂#2021-12-0214:29BenjaminIs there a ghetto fix way of using a custom (1.11alpha) in ions?#2021-12-0214:34Joe LaneNope#2021-12-0214:55Benjaminyea I fix it differently#2021-12-0215:56BenjaminWhat is the correct way to define a git sha in deps.edn?
When I do it like this
io.github.discljord/discljord
{:git/sha "f4942108038463b4c381c1ab152cdb2ae4863b38"
:sha "f4942108038463b4c381c1ab152cdb2ae4863b38"}
then datomic ion dev is happy, but clj is not
Error building classpath. git coord has both :sha and :git/sha for io.github.discljord/discljord
I feel like the issue is that ion-dev doesn't read the namespaced one. Says "sha missing" when I only put :git/sha#2021-12-0216:19Alex Miller (Clojure team)ion stuff is still using the older tools.deps that doesn't understand :git/sha#2021-12-0216:19Alex Miller (Clojure team)but clj should still work with :sha#2021-12-0216:19Alex Miller (Clojure team)I guess you'll also need :git/url though#2021-12-0216:20Alex Miller (Clojure team):git/url + full :sha should work in both#2021-12-0216:46Benjamincan confirm
org.suskalo/discljord
{:git/url "
this worked 🎉#2021-12-0321:37jacekschaeI’m working on the last part of the Learn Datomic course and I encountered a strange thing with {:op :push}
I have my `.clojure/deps.edn` as follows:
{:mvn/repos
{"datomic-cloud" {:url ""}
"cognitect-dev-tools" {:url ""}}
:aliases
{:ion-dev
{:deps {com.datomic/ion-dev {:mvn/version "1.0.294"}}
:main-opts ["-m" "datomic.ion.dev"]}}}
When I push using clj -A:prod:ion-dev '{:op :push}' with this project deps.edn
{:paths
["src/main" "src/resources"]
:mvn/repos
{"datomic-cloud" {:url ""}}
:deps
{org.clojure/clojure {:mvn/version "1.10.3"}
ring/ring {:mvn/version "1.9.4"}
integrant/integrant {:mvn/version "0.8.0"}
metosin/reitit {:mvn/version "0.5.15"}
clj-http/clj-http {:mvn/version "3.12.3"}
ovotech/ring-jwt {:mvn/version "2.2.1"}}
:aliases
{:dev
{:extra-paths ["src/dev"]
:extra-deps {com.datomic/dev-local {:mvn/version "0.9.235"}
integrant/repl {:mvn/version "0.3.2"}}}
:test
{:extra-paths ["src/test"]
:extra-deps {com.datomic/dev-local {:mvn/version "0.9.235"}
ring/ring-mock {:mvn/version "0.4.0"}
integrant/repl {:mvn/version "0.3.2"}}}
:prod
{:extra-deps {com.datomic/ion {:mvn/version "1.0.57"}
com.datomic/client-cloud {:mvn/version "1.0.117"}}}}}
I get following error
Downloading: com/datomic/ion/1.0.56/ion-1.0.56.jar from
{:command-failed "{:op :push}",
:causes
({:message
"Could not find artifact com.datomic:ion:jar:1.0.56 in central ()",
:class ExceptionInfo,
:data
{:lib com.datomic/ion,
:coord {:mvn/version "1.0.56", :deps/manifest :mvn}}})}
When I move the `:prod` `:extra-deps` to `:deps` it works and I can push, it creates the revision and everything is fine. Is there a way I can push this with the :prod alias and not put com.datomic/ion and com.datomic/client-cloud in :deps?#2021-12-0321:38xcenoI had a similar issue last year, back then it was a limitation or bug in tools.deps where aliases will be ignored on push (at least that's how I remember it)
Could as well be the same issue for you.#2021-12-0321:48jacekschaeThanks for the information. @U064X3EF3 are you aware of this, is this still the limitation?#2021-12-0321:55Alex Miller (Clojure team) Not positive, but probably#2021-12-0322:18jacekschaeokay … so what’s the recommendation here? Is this mentioned anywhere in the docs?#2021-12-0400:14Alex Miller (Clojure team)Really a question for @U1QJACBUM #2021-12-0400:44jarrodctaylorWe will look into this and get back to you @U8A5NMMGD
#2021-12-0321:42xcenoHas anybody ever used dtype-next in a datomic ion?
I'm trying to deploy my ion right now and I get a Syntax error macroexpanding at ... whenever clj comes across a dtype-next/->tensor call.
Here's the inner exception:
DatomicCoreAnomaliesException": {
"Via": [
{
"Type": "clojure.lang.Compiler$CompilerException",
"Message": "Syntax error macroexpanding at (redacted.cljc:136:6).",
"Data": {
"ClojureErrorPhase": "Execution",
"ClojureErrorLine": 136,
"ClojureErrorColumn": 6,
"ClojureErrorSource": "redacted.cljc"
},
"At": [
"clojure.lang.Compiler$InvokeExpr",
"eval",
"Compiler.java",
3711
]
},
{
"Type": "java.lang.ArrayIndexOutOfBoundsException",
"Message": "Index 0 out of bounds for length 0",
"At": [
"tech.v3.tensor.dimensions.global_to_local$elem_idx__GT_addr_fn",
"invokeStatic",
"global_to_local.clj",
34
]
}
],
and the apparently offending line: https://github.com/cnuernber/dtype-next/blob/master/src/tech/v3/tensor/dimensions/global_to_local.clj#L34#2021-12-0804:42Drew Verleewhat is dtype-next?#2021-12-0804:44Drew Verleei don't see why not? that's more about the JVM than datomic's persistence layer.#2021-12-0804:46Drew Verleeso this is probably relevant
Support for JDK-8 through JDK-17+ - JDK-16 is no longer supported. For jdk-17 usage, please see project.clj for required flags.#2021-12-0414:58BenjaminWhat is the easiest way to have 2 lambdas with api gateway? Currently I'm going to implement a single http-direct ion and put routing there. :integration :api-gateway/proxy from the example looks promising but I don't get: 1) what the endpoint for it is 2) if I need to configure something manually, because here it says "older versions" https://docs.datomic.com/cloud/ions/ions-reference.html#web-lambda-proxies#2021-12-0513:52Benjaminok I figured out that aws api gateway is kinda parallel to doing routing myself. I'm guessing there isn't really any preference on either or does anybody have experiences to share?#2021-12-0923:09Jake ShelbyIt depends on what you're trying to accomplish - if you have several "prefix" routes, you can configure those in API gateway to go to different lambda ion handlers - API gateway can proxy everything to those prefixes, and your app can handle further nested routing from there. This is handy if you have a public part of your API and other parts that require different authentication from each other - API gateway will allow you to specify no authentication for one prefix, and specify a cognito pool authentication (for example) to another prefix; and then yet another authentication (like a lambda) for another prefix#2021-12-0714:26Benjaminare ? allowed in ident keywords? Because there is isComponent I'm guessing it would follow clojure idiom, if it could?#2021-12-0714:38favilathis is inherited from on-prem, which made some effort to look Java-friendly.#2021-12-0714:53favilabut yes, idents are keywords and any legal clojure keyword is a legal datomic ident
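favila's point in practice — a sketch installing a boolean attribute whose ident ends in `?`, Clojure-style (the attribute name is made up, and `conn` is assumed):

```clojure
;; Sketch: any legal Clojure keyword works as an ident, including ones with ?
(d/transact conn
  {:tx-data [{:db/ident       :airstats.pipeline/active?
              :db/valueType   :db.type/boolean
              :db/cardinality :db.cardinality/one}]})
```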
#2021-12-0714:58Benjaminok yea#2021-12-0817:30seancorfieldJust popping in to make sure some Datomic folks see this SO question: https://stackoverflow.com/questions/70270296/datomic-free-failed-on-openjdk-17
#2021-12-0817:30seancorfield(not my Q so please answer the OP on SO)#2021-12-0908:22joshkhi've been working on some IAM policies to deny access to certain databases. something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor2",
      "Effect": "Deny",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::system-storage-bucket/system-name/datomic/access/dbs/db/my-restricted-db/*"
    }
  ]
}
any tips for locking down access further? some thoughts and questions:
• do developers need write access to s3://storage-bucket/vals/* ?
• do developers need write access to other related services such as DynamoDB?
• is it possible to restrict access to particular compute and query groups via IAM policies?
thanks!#2021-12-1014:54BenjaminFor ions is there a way to (push) deploy different configs to the same group? Use case is to have 2 groups and e.g. multiple repos.#2021-12-1014:58Joe Lane@U02CV2P4J6S If I'm understanding your question right, you would make a *-app project which depends on your other two projects and has 1 config for both of them. An example of this is how we built https://github.com/Datomic/ion-event-example and then built https://github.com/Datomic/ion-event-example-app later on, showing how we could have multiple ion "apps" or "libraries" composed together.
#2021-12-1015:00BenjaminIndeed that'd be it#2021-12-1113:44hadilsI have a Lambda Datomic Ion. I am trying to use cognitect.aws.api for the API :apigatewaymanagementapi When I do a :GetConnection it results in this error: : Name or service not known. I don’t know how to resolve this. Does anyone have some insight into this? Thanks!#2021-12-1115:47mfikesI can't DNS resolve http://my.datomic.com ; wondering if this is just me#2021-12-1115:50mfikesGiven the two comments above, I'm wondering if there are general DNS outages going on.#2021-12-1116:00jaret@mfikes we're rolling out a new version of http://my.datomic.com. Apologies we are working on the issue right now.
#2021-12-1116:09jarethttps://forum.datomic.com/t/my-datomic-com-under-maintenance-dec-11-2021/2012#2021-12-1116:47jaretMaintenance should be complete. Please let us know if you encounter any issues.#2021-12-1117:21mfikes@jaret Thanks! Looks like DNS might be good, but I'm seeing a 503 when attempting to pull the artifact
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:datomic-pro:pom:1.0.6165 from/to (): status code: 503, reason phrase: Service Unavailable (503)
#2021-12-1117:24Robert A. RandolphWe're looking into this.#2021-12-1117:22mfikesIf you go to in a browser, it renders {"message":"Service Unavailable"}#2021-12-1117:24jarrodctaylorThanks @mfikes we are still getting everything wired back. Will post back here when fully resolved.
#2021-12-1120:03michele mendeld/datoms for :aevt and :avet looks the same
(d/datoms db {:index :aevt :components [:inv/color]})
(d/datoms db {:index :avet :components [:inv/color]})
gives
(#datom[87960930222155 74 "blue" 13194139533319 true])
(#datom[87960930222155 74 "blue" 13194139533319 true])
Shouldn't :avet be
(#datom[87960930222155 "blue" 74 13194139533319 true])
#2021-12-1120:10favilaThe index refers to sort order, not the structure of the datoms returned#2021-12-1120:11favilaThe datoms are always the same#2021-12-1120:15michele mendelI understand, but do eavt, aevt and avet each have their own index? Looking at the file system, I only see log.idx.#2021-12-1120:17michele mendelAnd why do we get the whole color entity (including type and cardinality) for :eavt ?
(d/datoms db {:index :eavt :components [:inv/color]}))
(#datom[74 10 :inv/color 13194139533318 true]
#datom[74 40 23 13194139533318 true]
#datom[74 41 35 13194139533318 true])#2021-12-1200:14favilaTo your first question: eavt etc are the indexes. Datomic indexes are covering indexes (ie contain all the data they index—they are not merely pointers to a shared pool). Also the indexes are on disk as blocks(nodes) arranged in a b+tree like structure, there is not a 1-1 correspondence to files#2021-12-1200:15favilaTo your second: your datom call is asking for the portion of eavt where all E match :inv/color, which in your db resolves to entity 74. So this is exactly what you asked for#2021-12-1200:15favilaYou are getting the datoms where E is the color attribute itself#2021-12-1200:17favilaThe first datom in that list is establishing the db ident for the entity, [74 :db/ident :inv/color TX true]#2021-12-1200:17favila10 = the db/ident attribute#2021-12-1206:21michele mendelThanks!#2021-12-1120:09ghadiEach datom itself doesn't change, it's the iteration order of all datoms that changes between indices @michelemendel #2021-12-1121:18jarrodctaylor@mfikes service at my.datomic should be restored.
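To illustrate favila's and ghadi's point in the thread above, here is a hedged sketch (Datomic client API; `db` is assumed to be a database value as in michele's example): the :index option only selects the iteration order, and every datom destructures the same way.

```clojure
(require '[datomic.client.api :as d])

;; Same shape of datom from every index; only the sort order differs:
;;   :eavt - entity, attribute, value, tx
;;   :aevt - attribute, entity, value, tx
;;   :avet - attribute, value, entity, tx
(doseq [index [:eavt :aevt :avet]]
  (println index
           (map (fn [[e a v tx added?]] [e a v tx added?])
                (take 3 (d/datoms db {:index index})))))
```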
#2021-12-1215:54Robert A. RandolphWe will be continuing to perform maintenance tasks on My Datomic today.
There will be approximately 60 minutes of total downtime with intermittent availability over the next 3 hours.
#2021-12-1217:27Robert A. RandolphMy Datomic downtime is over. There may be intermittent function over the next couple hours while we continue to work on the application.
Please ping me or @jarrodctaylor if you encounter any problems.
Thank you!#2021-12-1309:56wegi@audiolabs It seems that an artifact is now missing, when trying to resolve deps for com.datomic/datomic-pro com.datomic:memcache-asg-java-client:jar:1.1.0.31 can not be found#2021-12-1310:16n2oError building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.30 in central ()
with datomic-pro com.datomic/datomic-pro {:mvn/version "1.0.6316"}
and
Downloading: com/datomic/datomic-pro/1.0.6344/datomic-pro-1.0.6344.jar from
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.31 in central ()
with uncached com.datomic/datomic-pro {:mvn/version "1.0.6344"}#2021-12-1312:16Robert A. Randolph@U1PCFQUR3 thank you, I will look into this.#2021-12-1312:27jaret@U1PCFQUR3 and @U49RJG1L0 memcache-asg requires your http://my.datomic.com creds. As of the latest release the jar is also included with the Datomic download. You can run bin/maven-install within the directory for https://docs.datomic.com/on-prem/changes.html#1.0.6344 to install to your local maven repository.#2021-12-1312:28n2ohm yes i thought about it, but in the past this was not necessary and more comfortable 😄#2021-12-1312:29n2oPreviously, we did not need to download and install the complete datomic db for the CI. We provide the credentials in our project’s CI.#2021-12-1312:39n2oHm, a different error occurs:
λ clj -A:test
Downloading: org/clojure/clojure/maven-metadata.xml from cloud-maven
Downloading: com/datomic/memcache-asg-java-client/1.1.0.31/memcache-asg-java-client-1.1.0.31.pom from cloud-maven
Downloading: com/datomic/memcache-asg-java-client/1.1.0.31/memcache-asg-java-client-1.1.0.31.jar from cloud-maven
Error building classpath. Could not find artifact com.datomic:memcache-asg-java-client:jar:1.1.0.31 in central ()
#2021-12-1312:46n2oI built a minimal repo for this: https://github.com/n2o/datomic-minimal-bug#2021-12-1312:47n2oThe error occurs on two dev machines, both set up with credentials for http://my.datomic.com and everything worked the complete year as expected, but since today / after your update the error exists.#2021-12-1312:50Robert A. Randolphok thank you! I'll be investigating.
#2021-12-1312:51Robert A. Randolph@U1PCFQUR3 do your credentials in your maven settings.xml match what's in http://my.datomic.com currently?#2021-12-1312:57n2oYes, I can send you my account name via PM if you want to check the logs or similar.#2021-12-1313:24Robert A. RandolphWe've found the issue and are working on a fix.#2021-12-1313:29n2oUuuuh great, thanks 👍#2021-12-1316:20Robert A. Randolph@U1PCFQUR3 I was able to reproduce using your repro, and have deployed a fix.
#2021-12-1316:21Robert A. Randolphcan you please retest and let me know#2021-12-1316:59n2oYes, it works ✅ 🥳 thanks #2021-12-1313:07mfikes@audiolabs I might be seeing a different issue with an internal server error
api_1 | Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.datomic:datomic-pro:pom:1.0.6269 from/to (): status code: 500, reason phrase: Internal Server Error (500)
#2021-12-1313:08Robert A. RandolphApproximately what time did this occur?#2021-12-1313:08mfikesWithin the last 10 minutes.#2021-12-1313:09mfikesI'm trying again now to see if it is still happening...#2021-12-1313:11mfikesYeah, here is a gist of a repro with the full stack https://gist.github.com/mfikes/48a66956e39ab928f7fa289d7c42aae1#2021-12-1316:39Robert A. RandolphAre you still encountering this error?#2021-12-1316:52mfikesYes, just heard about it from another team member at Vouch; will confirm again myself#2021-12-1316:56mfikesJust tried again and getting 500s https://gist.github.com/mfikes/e9e77d75d9b49ddde420b17bf692b8db#2021-12-1317:01Robert A. Randolph@U04VDQDDY what are you running to encounter this? What do your deps look like?#2021-12-1317:04mfikesIn :mvn/repos we have a map entry with "" {:url ""}}#2021-12-1317:04mfikesLooking for other places where we actually refer to it...#2021-12-1317:11mfikesWe also have the usual stuff in our .m2/settings.xml like
<settings>
  <servers>
    <server>
      <id></id>
      <username>${MYDATOMIC_USERNAME}</username>
      <password>${MYDATOMIC_PASSWORD}</password>
    </server>
  </servers>
</settings>#2021-12-1317:20mfikesSorry, took a little while to find it @audiolabs . We have a :deps map entry like
com.datomic/datomic-pro {:mvn/version "1.0.6269"}#2021-12-1317:23Robert A. RandolphWe found an error in the logs that appear to match your exception, however all other downloads for that version are proceeding correctly. I'm working through understanding what the differences are now.#2021-12-1317:23mfikesThere must be more to this... as I can't repro with a simple deps file#2021-12-1317:24mfikesThis is successfully downloading the artifact for me from my desktop
{:deps {com.datomic/datomic-pro {:mvn/version "1.0.6269"}}
:mvn/repos {"" {:url ""}}}#2021-12-1317:25mfikesBut the error occurs in the bowels of our Docker setup... seeing if I can figure out more#2021-12-1317:25mfikesFWIW the artifact downloaded successfully in Docker a few times over the weekend, and the internal server error ultimately came back, leaving us where we are now#2021-12-1317:27Robert A. RandolphWe deployed this current version around 11am eastern Sunday. Did you have successful downloads after that point?#2021-12-1317:27mfikesI want to say that from that time afterwards things were failing for me, but I'm not 100% sure.#2021-12-1317:41Robert A. RandolphIt appears that this may be an http request, not https. We only support https now#2021-12-1317:41Robert A. Randolphit may be unrelated, but it will be an issue#2021-12-1317:42Robert A. RandolphHowever I'm adding more logging to be certain#2021-12-1318:30Robert A. RandolphWe've identified an issue with head requests on files in the maven repo. If you can turn off head requests it should work. Meanwhile we're working towards a solution.#2021-12-1318:56favila+1 here, I’m also getting this#2021-12-1318:56favilaCould not transfer artifact com.datomic:datomic-pro:jar:1.0.6269 from/to (): Failed to transfer file with status code 500
#2021-12-1318:56favilaon something that definitely worked before#2021-12-1319:12Robert A. Randolph@U09R86PA4 can you confirm that this is a head request on the file that's failing?#2021-12-1319:14Robert A. Randolphmay work now, or very shortly, as we deployed a fix for head requests.#2021-12-1319:19favilathis is through lein, so I’m not sure what it’s retrieving. Now I get a different status code (204)#2021-12-1319:20favilaCould not transfer artifact com.datomic:datomic-pro:jar:1.0.6269 from/to (): Failed to transfer file with status code 204
#2021-12-1319:22mfikesAhh good find. HEAD is in the stacktrace#2021-12-1319:22mfikeshttps://github.com/elastic/java-langserver/blob/master/org.elastic.jdt.ls.core/src/org/eclipse/aether/transport/http/HttpTransporter.java#L239#2021-12-1319:24Robert A. Randolph@U04VDQDDY is it working for you?#2021-12-1319:26mfikesIt appears so @audiolabs thanks... I think Vouch is back up again 🙂
#2021-12-1319:27Robert A. Randolph@U09R86PA4 we're looking into Lein now. We've always returned 204 (which should be the correct status code). So there is another issue somewhere.#2021-12-1319:42mfikesConfirmed that Vouch is indeed back up (saw our server make it through to runtime, and also got a confirm from another Vouch team member). Thanks for the fast response @audiolabs!
#2021-12-1319:52Robert A. Randolph@U09R86PA4 we're unable to reproduce issues with lein. Could you start a new message/thread with information about your configuration?#2021-12-1414:42favilaThe issue eventually went away#2021-12-1414:43favilaI have a new issue though! yay#2021-12-1315:10conanIs there any risk to Datomic from the log4j vulnerability?
#2021-12-1315:36jaret@conan We do not use log4j. We do not have a dependency on log4j. We include a bridge log4j-over-slf4j which is only included for customers who use log4j. Therefore there is no datomic vulnerability here unless you introduced log4j in your app.
If you want to learn more the best resource is here: http://www.slf4j.org/log4shell.html
#2021-12-2315:20Aleh Atsman@U1QJACBUM but datomic image with AMI id i-0ccb21ac99b06cf35 uses java-11-amazon-corretto and java-11-amazon-corretto-headless and looks like those packages are vulnerable#2021-12-2315:23Aleh Atsman@U1QJACBUM the ami image creation date is "CreationDate": "2021-09-22T14:33:04.000Z"#2021-12-2315:25Aleh Atsman@U1QJACBUM I only guess that these AMI have to be rebuild with patched version of java-11-amazon-corretto and java-11-amazon-corretto-headless#2021-12-2317:26jaretHi @aleh_atsman We spent some time looking into this because we had another customer report this problem this morning from AWS scanner (Amazon inspector) and after reviewing everything in detail we do not believe we have exposure here and the issue is with the scanner.
We do not use Log4j or JNDI, but end-user applications might (such as via ions). If you are such a user, you should upgrade your Log4j version (or find an alternative logging solution such as https://docs.datomic.com/cloud/ions/ions-monitoring.html#overview).
In general, I think the AWS scanner (Amazon inspector) is not correct. It is contradicting the security bulletin that Amazon wrote.
Details:
• Per [V5] of the Log4J https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ the Corretto JVMs do not have vulnerabilities related to Log4J. The latest Amazon Corretto released October 19th is not affected by CVE-2021-44228 since the Corretto distribution does not include Log4j. We recommend that customers update to the latest version of Log4j in all of their applications that use it, including direct dependencies, indirect dependencies, and shaded jars.
• The Corretto HotPatch, mentioned in the https://aws.amazon.com/blogs/security/open-source-hotpatch-for-apache-log4j-vulnerability/ , the top of [https://aws.amazon.com/security/security-bulletins/AWS-2021-006/] of the AWS security bulletin for log4j, and found here on https://github.com/corretto/hotpatch-for-apache-log4j2/ modifies running JVMs to completely disable the use of JNDI. Again, we don't use JNDI. The lack of applying this patch does not implicitly make the Corretto JVMs vulnerable to the log4j CVE. (It has however caused headaches and hours of lost sleep for https://github.com/corretto/hotpatch-for-apache-log4j2/issues/43)
• None of the agents or daemons from AWS are written in java
#2021-12-1315:41jaretBecause I imagine this question will come up with other customers I went ahead and created a thread with the answer to log4j here:
https://forum.datomic.com/t/datomic-and-log4j-cve-2021-44228-no-vulnerability-in-datomic/2013
#2021-12-1321:13Dimitar UzunovThis includes both Datomic On-Prem and Cloud right?#2021-12-1321:14jaretYes.
#2021-12-1318:25Benjaminfor ions is getting the database always fast (less than 100ms)? I'm wondering if I should call d/connect every few minutes or sth but that seems a bit cargo culty 😅#2021-12-1318:27BenjaminIt's just that my app might hit a timeout and a user gets "interaction failed"
---
I'll just make those interactions handle things potentially taking a bit now#2021-12-1318:48kennyWhat problem are you trying to solve?#2021-12-1318:50BenjaminI had a bug where something timed out and one of the things it does is transacting to datomic.
Not sure that was the thing that took long. Or if there was something else wrong. 😅#2021-12-1318:51Joe Lane@U02CV2P4J6S Once you have a connection you don't need to "refresh" it.
#2021-12-1318:52Benjaminyea I'm trying to get it for each unit of work.
Use case is a discord bot where say every 20s some command runs.#2021-12-1318:53kennyIn general, all remote calls should be wrapped in a retry. #2021-12-1318:54kennyYou should figure out which call is failing: transact or connect.
#2021-12-1318:54Benjaminah I wasn't doing it for the transactions yet, will put it#2021-12-1318:57kennySeparately, we cache all calls to connect. Unclear if it is recommended or not: https://ask.datomic.com/index.php/569/should-you-cache-d-connect-calls. It doesn't seem to have any impact.
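kenny's advice above ("all remote calls should be wrapped in a retry") could look something like this hedged sketch. `with-retry` is a made-up helper, not a Datomic API, and real code should only retry calls that are safe to repeat, such as idempotent transactions:

```clojure
;; Hypothetical retry wrapper with linear backoff (illustrative only).
(defn with-retry
  "Calls thunk; on exception, retries up to max-attempts times, then rethrows."
  [thunk max-attempts]
  (loop [attempt 1]
    (let [result (try
                   {:value (thunk)}
                   (catch Exception e
                     (when (>= attempt max-attempts)
                       (throw e))
                     {:error e}))]
      (if (contains? result :value)
        (:value result)
        (do (Thread/sleep (* 100 attempt)) ; crude linear backoff
            (recur (inc attempt)))))))

;; e.g. (with-retry #(d/transact conn {:tx-data tx-data}) 3)
```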
#2021-12-1409:18andersquick question: does datomic on-prem HA require two separate license? paraphrasing from https://docs.datomic.com/on-prem/operation/ha.html
> Running HA requires the use of a paid Datomic Pro license key in both transactors.#2021-12-1412:41jaretNo. A single license covers a system (two transactor machines for HA) + associated peers and that also includes needed staging/testing/dev environments.#2021-12-1412:41andersthanks, @U1QJACBUM 👍#2021-12-1410:18vlaaad(time
(db/q '[:find ?c1
:in $ %
:where [?rc :concept/id "SgNH_hag_n9D"]
[?c1 :concept/type "skill"]
(edge ?rc "related" ?c2 _)
[(= ?c1 ?c2)]]
(db/get-db 1)
relation/rules))
;; "Elapsed time: 447.703084 msecs"
(time
(db/q '[:find ?c
:in $ %
:where [?rc :concept/id "SgNH_hag_n9D"]
[?c :concept/type "skill"]
(edge ?rc "related" ?c _)]
(db/get-db 1)
relation/rules))
;; "Elapsed time: 6532.150235 msecs"#2021-12-1410:18vlaaadI thought unification will make datomic query execution engine to do less work, not more…#2021-12-1410:20vlaaadbut here I perform unification “manually” with (= ?c1 ?c2) and it performs 14 times faster then with unification…#2021-12-1410:21vlaaadis it some weird edge case that has something to do with the rule (not sure if it’s needed here, it’s somewhat big)?#2021-12-1414:46favilaMy http://my.datomic.com page seems a bit incoherent now. It shows my current license (good until may 2022), but the order history is extremely incomplete (only transaction shown is from 2015) and I don’t see any datomic downloads after 1.0.6165 and I get a 401 using maven access for newer versions of datomic-pro#2021-12-1414:58Robert A. RandolphWe're aware of the issue and working on it.#2021-12-1415:30jaret@favila I believe we resolved the issue with https://my.datomic.com please let us know if you are still having issues or see any other oddities.#2021-12-1415:59tvaughan$ curl -iL https://my.datomic.com
HTTP/2 302
date: Tue, 14 Dec 2021 15:59:17 GMT
content-type: text/html;charset=utf-8
content-length: 0
location: http://my.datomic.com:443/login
server: Jetty(9.4.41.v20210516)
set-cookie: my-datomic=YzJ28XgVDb9owvW3n3PsaI%2FiJxYW%2Bwg%2FN2HuNa%2BU%2BFy4pAZr0fyE6Wv0FGl7ZQtb%2B2n%2BVW4XENjY3EFCLFutKXKc%2BU7YZjF%2BwNH%2B3iF5ak76cuhkVQLLNmF8HUlWLYa13HEmm%2BNVZLVtwr9QxzC4b5et77%2FRXr8C41BkSDWEulc%3D--6NcFiZInwMk14ntSYHAxPOR2tbeaCzGRmmmqc1mD4i4%3D;Path=/;HttpOnly
x-frame-options: DENY
x-xss-protection: 1; mode=block
x-download-options: noopen
strict-transport-security: max-age=31536000; includeSubdomains
x-permitted-cross-domain-policies: none
x-content-type-options: nosniff
content-security-policy: object-src none
apigw-requestid: KWLVViAaIAMES0Q=
HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Tue, 14 Dec 2021 15:59:17 GMT
Content-Type: text/html
Content-Length: 220
Connection: close
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
</body>
</html>#2021-12-1416:00tvaughanNote: location: #2021-12-1416:20jaretThanks @U0P7ZBZCK looks like we are missing a redirect to https. Will investigate.#2021-12-1416:21jaretit's not possible to login over HTTP.#2021-12-1416:21jaretSite should be accessible from https://my.datomic.com#2021-12-1416:22tvaughanWorks for me now. Thanks#2021-12-1416:51jaretWe're aware that we are still experiencing issues with https://my.datomic.com. Working to resolve the issue.#2021-12-1417:29michele mendelWhy does ?added in this query return true or false, and not :db/add or :db/retract ?
(d/q '{:find [?e ?a-name ?v ?added]
:in [$]
:where [[?e ?a ?v _ ?added]
[?a :db/ident ?a-name]]}
(d/history (d/db conn)))
#2021-12-1417:31Joe LaneBecause the 5th position of a datom is a boolean, not the op used.#2021-12-1417:34michele mendelYes, but why isn't the op used?#2021-12-1417:40Joe LaneAlso @michelemendel, you probably want to do something like this.
(let [the-db (d/db conn)
the-h-db (d/history the-db)
a->a-name (into {}
(d/q '[:find ?a ?a-name
:where [?a :db/ident ?a-name]]
the-db))]
(map (fn [d] (update d 1 a->a-name)) (seq (d/datoms the-h-db :eavt))))#2021-12-1418:02michele mendelI had to change it a little to make it work, but what was the intended collection to map?#2021-12-1418:10Joe LaneWhat?#2021-12-1418:12michele mendelThe map has no input collection
(map
(fn [d] (update d 1 a->a-name (d/datoms history-db {:index :eavt})))
Shouldn't there be a collection here?
)#2021-12-1418:19michele mendelMaybe you wanted to pick out the keys in a->a-name from the history.
(let [db (d/db conn)
history-db (d/history db)
a->a-name (->> (d/q '[:find ?a ?a-name
:where [?a :db/ident ?a-name]]
db)
(into {}))]
(->> a->a-name
(map (fn [d] (update d 1 (seq (d/datoms history-db {:index :eavt})))))))
This doesn't work, though.#2021-12-1418:47Joe Lane(let [the-db (d/db conn)
the-h-db (d/history the-db)
a->a-name (into {}
(d/q '[:find ?a ?a-name
:where [?a :db/ident ?a-name]]
the-db))]
(map (fn [d] (update d 1 a->a-name))
(seq (d/datoms the-h-db :eavt))))#2021-12-1417:41Joe LaneOtherwise you're going to load the entire history of the database into memory#2021-12-1418:19mfikesWe've started seeing 500s again from http://my.datomic.com; opening a new thread#2021-12-1418:20mfikesPreviously HEAD requests were causing it.#2021-12-1418:20mfikesHere is a gist of the current behavior https://gist.github.com/mfikes/cf048db2beca5182c4db3ed5cdf0c5ce#2021-12-1418:23mfikesOn the surface, the fact that peek is appearing near the end of the stacktrace is consistent with the previous issue with HEAD requests#2021-12-1418:29tvaughanI'm seeing the same problem I reported earlier too
$ curl -iL https://my.datomic.com
HTTP/2 302
date: Tue, 14 Dec 2021 18:27:56 GMT
content-type: text/html;charset=utf-8
content-length: 0
location: http://my.datomic.com:443/login
Even though the initial request is via https, the redirect is via http using port 443#2021-12-1418:36Robert A. RandolphWe're looking into it, thank you for the reports!#2021-12-1419:16mfikesFWIW, it just worked for me.#2021-12-1419:40jarrodctaylorWe think the issues have been resolved. Thanks for hanging with us and reporting issues. Let us know if anything else pops up.#2021-12-1609:34andersany hopes of adding support for aws c5/c6 ec2 instances to datomic-pro on-prem?#2021-12-1613:08jaretIn Datomic on-prem you can roll your own CFT. The CFT we provide is purely for development convenience.#2021-12-1613:18anders👍#2021-12-1612:18Ivan FedorovIs this the right place to ask about ions?
I was wondering if there are configuration options other than those listed in the https://docs.datomic.com/cloud/ions/ions-reference.html#parameters. E.g. can I upload my own config-staging.edn?#2021-12-1806:17Drew Verleeare you talking about the configuration the ions use for functionality like :allow or :lambdas ? what were you looking to add?#2021-12-1808:20BenjaminYou can put it in resources/config-staging.edn and retrieve it with (io/resource "config-staging.edn")#2021-12-1912:36Ivan Fedorov@U02CV2P4J6S yes, I somehow overlooked that.
I can check ions env map and then dance from there, thanks! I initially was thinking about bringing in an outside file and placing it in the project root directory.#2021-12-1912:41Ivan Fedorov@U0DJ4T5U1 I’m trying to understand how Ions work and if I can bring an outside configuration file for my app and place it into a project root. I’m not the dev-ops in my team, and I don’t have an understanding of how Ions nodes are composed.
I think I was looking for this
https://docs.datomic.com/cloud/whatis/architecture.html#nodes
But I don’t yet understand if I can ssh into any node.#2021-12-1918:32Drew Verlee@U0A5V8ZR6 why ssh into a node?
The general workflow i have used in my very limited case has been that i don't ssh to the nodes. For the clojure part, i work locally. For integrations like the web api gateway, i have to read a lot of docs and look at logs.#2021-12-1614:16xcenoI can't seem to figure out how to pass a "blank" as an argument into a query. Here's what I'd like to do:
(d/q '[:find (pull ?edge [*])
:in $ ?direction ?uuid
:where
[?e :my.graph.node/id ?uuid]
[?edge ?direction ?e]]
db direction-kw uuid)
In this query I'd like to set direction-kw to either some keyword or _ (a https://docs.datomic.com/cloud/query/query-data-reference.html#blanks)
I tried to set direction-kw to nil and '_ but that only ends in exceptions. What am I doing wrong?#2021-12-1614:24favila_ is a syntactic construct, not a value#2021-12-1614:25favilato have an “optional” binding, you need to use either a sentinel value and rules for sentinel vs non-sentinel, or you need to construct the query to include or omit the optional binding#2021-12-1614:29favilaconstruction example
(let [base {:query {:find '[(pull ?edge [*])]
                    :in '[$ ?uuid]
                    ;; with no direction, the edge clause matches any attribute
                    :where '[[?e :my.graph.node/id ?uuid]
                             [?edge _ ?e]]}
            :args [db uuid]}
      full (cond-> base
             (some? direction-kw)
             (-> (update-in [:query :in] conj '?direction)
                 ;; bind the attribute position when a direction is supplied
                 (assoc-in [:query :where 1] '[?edge ?direction ?e])
                 (update :args conj direction-kw)))]
  (d/query full))#2021-12-1614:34favilaActually, before I go further, I think you may not want this. How do you know that [?edge _ ?e] is matching what you consider to be a “direction” attribute?#2021-12-1614:34favilaI think you should not do anything I said and instead enumerate all direction attributes#2021-12-1614:34favila(d/q '[:find (pull ?edge [*])
       :in $ [?direction ...] ?uuid
       :where
       [?e :my.graph.node/id ?uuid]
       [?edge ?direction ?e]]
     db [direction-kw] uuid)
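The `[?direction ...]` collection binding above unifies the variable with each supplied value in turn, effectively OR-ing over the set. A plain-Clojure sketch of those semantics (hypothetical attribute names and tuple data, not the Datomic API):

```clojure
;; Plain-Clojure model of the [?direction ...] collection binding:
;; the query runs as if ?direction were bound to each value in turn.
;; Datoms are modeled as bare [e a v] tuples; attribute names are made up.
(defn edges-pointing-at
  "Entity ids of edges referencing target via any attribute in direction-kws."
  [datoms direction-kws target]
  (let [dirs (set direction-kws)]
    (->> datoms
         (filter (fn [[_e a v]] (and (dirs a) (= v target))))
         (map first)
         set)))

(edges-pointing-at [[10 :edge/left 1] [11 :edge/right 1] [12 :edge/left 2]]
                   [:edge/left :edge/right]
                   1)
;; => #{10 11}
```

Passing a single-element collection like `[direction-kw]` degenerates to the keyword case, and passing the full set of direction attributes plays the role of the blank.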
#2021-12-1614:35favilaand the “blank” case is
(d/q '[:find (pull ?edge *)
       :in $ [?direction ...] ?uuid
       :where
       [?e :my.graph.node/id ?uuid]
       [?edge ?direction ?e]]
     db [:left :right] uuid)#2021-12-1614:36favilanote only the input changed. (I’m making up direction kws, I don’t know what your set is)#2021-12-1615:23xcenoOhh I didn't think of binding a collection like this at all. Thank you, that actually makes way more sense for what I want to do!#2021-12-1614:29borkdudeWhich dependency is supposed to provide datomic.api and datomic.function if you're using dev-local?
https://github.com/fulcrologic/fulcro-rad-datomic/blob/develop/src/main/com/fulcrologic/rad/database_adapters/datomic.clj#L13-L14#2021-12-1614:39souenzzohttps://mvnrepository.com/artifact/com.datomic/datomic-free
or com.datomic/datomic-pro#2021-12-1614:43souenzzohttps://forum.datomic.com/t/requesting-feedback-on-dev-local-getting-started/1608/5
An easy to use experience, like com.datomic/datomic-free "0.9.5697" for beginners/learners/demos/tutorials. No one will create a account, download stuff, install, configure, etc… just for learn a new technology (we are clojure devs. we learn at REPL).
#2021-12-1614:44borkdudeif datomic-free is still required then I'm at a dead end#2021-12-1614:45borkdudeunless #datomic can provide one that is linked against a newer version of clojure#2021-12-1614:45borkdude\cc @U1QJACBUM#2021-12-1614:47souenzzoyou can create a account at https://my.datomic.com/, do credentials setup and etc to access datomic-pro#2021-12-1615:05stuarthallowayHi @U04V15CAJ! What are you trying to do?#2021-12-1615:08borkdudeHey Stuart! I'm helping out @U6VPZS1EK to compile his app to native with #graalvm. We are running into an issue with the locking macro which was fixed in clojure 1.10.2. But datomic-free is still AOT-ed against an older version of clojure so we're still running into the "unbalanced monitor" error from the graalvm bytecode verifier.#2021-12-1615:13borkdudeWe might be able to do this without datomic though, I have a meeting with him today so I'll report back here after that.#2021-12-1615:39Daniel JompheHi Michiel! Just to be sure, you mentioned dev-local in your original question.
Dev-local is for Datomic Cloud only, not for Datomic On-Prem.
From the look of your most recent messages (you use datomic-free), dev-local is not applicable to your situation.
Let's hope Cognitect can update datomic-free with a more recent AOT.#2021-12-1617:00stuarthalloway@U04V15CAJ We do not support any version of Datomic on #graalvm. I do not know what the issues would be (you are much experienced there) but I suspect that there would be many. Am I wrong about that?#2021-12-1617:49borkdude@U072WS7PE It depends. I do not know what is in datomic :)#2021-12-1617:49borkdudebut this was the first issue I bumped into#2021-12-1617:50borkdudethere are various other datalog-like databases that work fine with graalvm native, datalevin being one of them#2021-12-1617:51stuarthallowayLots of stuff, written before there was #graalvm to worry about. It would be interesting to know if dev-local would work in #graalvm as a proxy for the problems you would encounter, even if dev-local is not suitable for the current use case.#2021-12-1617:51borkdudealright, I can try to make that work. do you by any chance know against which clojure version dev-local is compiled?#2021-12-1617:52stuarthallowayI will have to check#2021-12-1617:53stuarthallowaySomething recent for sure.#2021-12-1617:54borkdudeif it's newer or equal to 1.10.2 then it's good#2021-12-1619:14borkdude@U072WS7PE I did a little bit of trying with the below program:
(ns devlocal.main
  (:require [datomic.client.api :as d]
            [datomic.dev-local.impl])
  (:gen-class))

(def movie-schema [{:db/ident :movie/title
                    :db/valueType :db.type/string
                    :db/cardinality :db.cardinality/one
                    :db/doc "The title of the movie"}
                   {:db/ident :movie/genre
                    :db/valueType :db.type/string
                    :db/cardinality :db.cardinality/one
                    :db/doc "The genre of the movie"}
                   {:db/ident :movie/release-year
                    :db/valueType :db.type/long
                    :db/cardinality :db.cardinality/one
                    :db/doc "The year the movie was released in theaters"}])

(def first-movies [{:movie/title "The Goonies"
                    :movie/genre "action/adventure"
                    :movie/release-year 1985}
                   {:movie/title "Commando"
                    :movie/genre "thriller/action"
                    :movie/release-year 1985}
                   {:movie/title "Repo Man"
                    :movie/genre "punk dystopia"
                    :movie/release-year 1984}])

(defn -main [& _args]
  (let [client (d/client {:server-type :dev-local
                          :system "dev"
                          :storage-dir :mem})
        _ (d/create-database client {:db-name "movies"})
        conn (d/connect client {:db-name "movies"})
        _ (d/transact conn {:tx-data movie-schema})
        _ (d/transact conn {:tx-data first-movies})
        db (d/db conn)
        all-titles-q '[:find ?movie-title
                       :where [_ :movie/title ?movie-title]]
        results (d/q all-titles-q db)]
    (prn results)
    (shutdown-agents)))
I needed to require [datomic.dev-local.impl] to work around the dynaload stuff. Then I got into some problem around cognitect.caster.Caster.thread where a Thread is initialized at the top level, which is not allowed/possible in an image.#2021-12-1619:15borkdudeI'll leave it at this, since I don't have the sources to do further digging or make changes to do further experimentation. Usually the above error can be worked around by using a delay around the top level value.#2021-12-1619:16stuarthallowayThanks! Is there a good list of standard gotchas such as the top-level thread issue?#2021-12-1619:18borkdudeFor completeness, here is how to repro:
mkdir -p classes
clojure -M -e "(compile 'devlocal.main)"
$GRAALVM_HOME/bin/native-image -cp classes:$(clojure -Spath) --initialize-at-build-time=. --no-server devlocal.main --no-fallback
We try to capture such caveats here: https://github.com/clj-easy/graal-docs#2021-12-1619:18stuarthallowayAlso @U6VPZS1EK we would be interested to know your motivations for targeting #graalvm. This is nowhere on our priority list but understanding why it is important could help move it up.#2021-12-1619:19stuarthallowayThanks @U04V15CAJ!
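The delay workaround borkdude mentions above can be sketched like this (a hypothetical worker thread, not Datomic code): a stateful object constructed at namespace load time gets initialized at image build time, which GraalVM rejects for Threads, while a delay defers construction to run time.

```clojure
;; Hypothetical example of the delay workaround for native-image.

;; Problematic under native-image: runs when the namespace loads,
;; i.e. at image build time with --initialize-at-build-time.
;; (defonce worker (doto (Thread. (fn [] (println "working"))) .start))

;; Workaround: construct lazily, force on first use at run time.
(defonce worker
  (delay (doto (Thread. ^Runnable (fn [] (println "working")))
           (.setDaemon true))))

(defn start-worker! []
  (.start ^Thread @worker))
```

The same pattern applies to values like `(def x (System/getProperty "user.dir"))`, which would otherwise be frozen into the image as build-time constants.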
#2021-12-1619:19borkdudeSome classes are restricted like Random, Security (probably related to Random) and Threads to be initialized at build time (which makes sense because they aren't random anymore).#2021-12-1619:20borkdudeAlso stuff like (def x (System/getProperty "user.dir")) should be wrapped inside a delay to prevent them from becoming constants#2021-12-1619:21borkdudeThis is related to --initialize-at-build-time=. which is necessary for clojure, since run time initialization of clojure compiled classes doesn't work (because of what happens in static initializers in such classes: loading from the classpath etc)#2021-12-1620:17genekimHello, @U072WS7PE and team! and you, too, @U04V15CAJ! 🙂
I hope y’all are doing well — I promise to write up a couple of paragraphs in the next day or two on what I’ve done with Datomic Cloud, and my aspirations that are behind my request for @U04V15CAJ’s help.
I think some of the progress we made today might obviate some of this (we had a pairing session this morning, grinding through a couple of other GraalVM native-image issues) , but maybe you’ll find my use case of using Datomic Cloud in Google Cloud Run interesting?
Catch y’all soon, and thanks for all the great info above! 🎉
#2021-12-2412:42borkdude@U6VPZS1EK @U072WS7PE I completely forgot about this:
https://github.com/babashka/babashka/pull/505/files
But someone seems to have made the datomic-pro client work with babashka (which is compiled with graalvm). That means it should also work in the above setup. I'm not sure if there are differences to the pro client that would make sense to you, e.g. the top level thread difference?
This was with [com.datomic/client-pro "0.9.57"]#2021-12-1808:25BenjaminWhen deploying an ion which of my namespaces get loaded? Guess just the ones that contain ions?#2021-12-1809:35danierouxYep.
I have a boot namespace that all my ions include, to do some setup. Just be aware of the on-use restriction - only get a DB on use, not during load.#2021-12-1913:02Ivan Fedorov@U9E8C7QRJ is there any possibility to trigger an event when DB becomes available?
I would use one to ensure migrations set is rolled#2022-01-1315:12donavanWe’re hitting the same issues discussed here. We have an Integrant system that starts our app and the Datomic client is used in a number of the Integrant init keys. 2 issues that arise from this are; the first request to the system has to load the whole app and secondly any errors in loading the system are only surfaced when someone hits the app. We’ve tried loading the app via a lambda but that means the deployment completes before the app starts.#2022-01-1315:21donavanApologies, upon reading that post again my point is rather implicit… We’d also find something like a callback or event that fired when the system was ready for connections really useful#2022-01-1322:07stuartrexkingWe tried this for a while and eventually moved away from Ions. If your app needs any kind of lifecycle management then you are better just deploying an uberjar IMO. See here https://forum.datomic.com/t/datomic-ion-lifecycle-events-or-hooks/1893#2021-12-1913:09Ivan FedorovHow would one copy a whole Datomic Cloud DB to another while removing personal data in the process?
I’m looking toward Log API, so I would just upload the whole log from A to B
https://docs.datomic.com/cloud/time/log.html
I have a cloud instance with personal data, and I want to move it to another, but in the process I want to replace all personal data with UUIDs.#2021-12-1916:17kennyYep - log api is the only user facing method.
Here’s a POC of an approach you could take.
https://github.com/fulcrologic/datomic-cloud-backup#2021-12-1918:13Ivan Fedorov@U083D6HK9 danks, man!#2021-12-2002:42Drew VerleeI have a couple dynamoDb questions, does anyone have a favorite place to ask those online? Stack overflow?#2021-12-2003:03ghadiHere if datomic related @drewverlee , #aws otherwise#2021-12-2003:03ghadiI'll answer either place :)#2021-12-2007:15popoppoThe doc lists CloudWatch metrics to be monitored.
https://docs.datomic.com/cloud/operation/monitoring.html#metrics
but I cannot find HttpDirectThrottled and HttpEndpointThrottled metrics on my CW.
Are those metrics still available? or am I missing something?#2021-12-2007:16popoppoour version is 781-9041#2021-12-2013:41jaretThe metrics are reported in Cloudwatch when triggered (i.e. this indicates you haven't been throttled), but you should probably see HTTPDirectOpsPending; if you haven't triggered throttling it won't report.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/viewing_metrics_with_cloudwatch.html:
> "Metrics that have not had any new data points in the past two weeks do not appear in the console."#2021-12-2023:43popoppo@U1QJACBUM
got it. Thanks!!#2021-12-2007:59TwanWe got an NVD warning this morning regarding a dep on Datomic (1.0.6202)
> pkg:maven/com.h2database/h2 - CVE-2021-23463
Is this a false positive, a known issue or something that we can safely ignore?#2021-12-2012:48jaretHi @U9M6WJ9PV h2 database is used for dev protocol DBs. I will double check with the dev team and loop back here. Generally, I also recommend that you upgrade to the latest 1.0.6344 when you are able.#2021-12-2013:20TwanBecause of https://clojurians.slack.com/archives/C03RZMDSH/p1633091502190900 we are not able to move to 1.0.6344 yet. Talking about that, is there any ETA on a fix for that issue?#2021-12-2013:24TwanThanks for checking out on the h2 story 🙂#2021-12-2013:26jaretYeah there should be a fix in our next release for the issue you referenced, but you can also work around it by only downgrading the peer-server if you do upgrade your transactor and peer.#2021-12-2013:27jaretAnd to be clear, we are looking at addressing the vulnerability you reported as well. However, development is investigating how to approach.
#2021-12-2019:24jaret@U9M6WJ9PV initial investigation indicates that this CVE is not exposed by Datomic's usage. Nonetheless we will be updating our version of H2 in an upcoming release.#2021-12-2019:27TwanCool, thanks! That's good to know#2021-12-2017:32andersAnyone from Datomic support here? We have issues with the transactor (on-prem) failing to start due to an invalid license that we just took delivery of (`Terminating process - License not valid for this release of Datomic`)#2021-12-2019:23jaretHi Anders! I just e-mailed you a temporary license that will work. We have identified the issue with your license creation and will need to deploy new code to my.datomic to address the problem. Apologies for the inconvenience this may cause and thank you so much for reporting this issue!#2021-12-2017:34Alex Miller (Clojure team)prob best to file a ticket https://www.datomic.com/support.html#2021-12-2017:34andersthanks, will do#2021-12-2019:40Ivan FedorovCan I configure an Ions instance to be available only on a private amazon subnet? I’m sorry, I’m bad at AWS. Just a general direction reference would be nice, thanks!#2021-12-2019:56jarethttps://docs.datomic.com/cloud/operation/vpc-access.html. The created VPC for Datomic and the created subnets are private and can be used for Ions.#2021-12-2019:56jaretDepends on what you are after specifically, but that doc is probably a good place to start.#2021-12-2020:50Ivan FedorovThanks @U1QJACBUM! That's helpful!
I’m looking to deploy a frontend-server EC2 node inside Datomic’s VPC and make it open to the web, but keep the backend only accessible from the VPC#2021-12-2108:49jarppeDoes Datomic on-prem support Postgres 14? I’m getting org.postgresql.util.PSQLException with message “The authentication type 10 is not supported”#2021-12-2108:50jarppeDo I have to downgrade Postgres, or is there a workaround for this?#2021-12-2108:51jarppeI’m using Datomic 1.0.6344#2021-12-2108:56jarppeI guess this is caused by the JDBC driver Datomic uses, which is postgres-9.3-1102, released in 2014!#2021-12-2114:11jaretHi @U0GE2JPNC I've made a story to look into this. I am aware of several customers using Postgres 11, but we don't actively test with every version of Postgres. After investigation if this is indeed related to the driver we will update in a future release of Datomic on-prem.#2021-12-2115:07jarppeGreat! From my googling “postgres The authentication type 10 is not supported” it really looks like a driver issue. When I disable authentication on Postgres side by adding this to pg_hba.conf:
host all all all trust
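The “authentication type 10” error above is Postgres’s SCRAM-SHA-256 authentication, which the bundled postgres-9.3 JDBC driver predates. A narrower workaround than `trust` (an assumption to verify against your setup, not something tested in this thread) is to fall back to md5 password auth for the datomic user:

```ini
# postgresql.conf - hash new passwords with md5 instead of scram-sha-256
password_encryption = md5

# pg_hba.conf - keep password auth, but md5 only for the datomic user
host  all  datomic  all  md5
```

The datomic user's password would then need to be set again (`ALTER USER datomic PASSWORD '...'`) so the stored hash is md5 rather than SCRAM.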
#2021-12-2115:07jarppeThen transactor connects successfully#2021-12-2115:09jarppeI’ll try to find time to see what happens if I just put a more recent driver jar on the transactor classpath#2022-01-1015:31jarppeHi @U1QJACBUM, have you any updates on this?#2022-01-1015:31jarppeI have not tried to update the JAR yet#2022-01-1015:31jarppeI was wondering should I invest some time in this, or should I just wait for an update to Datomic#2021-12-2110:23mmeijdenHi, I have a question regarding the AMI lifecycle/updates. We've recently integrated the Datomic instances with Systems Manager and there we found some findings that should be patched (recommended by AWS == required by our Security department). However, these patches require a reboot that triggers the autoscaling group to flag the instance as unhealthy and kill it.
Is there a way to get more often patched/updated AMI's than the regular datomic upgrades, so we can schedule this e.g. daily?#2021-12-2112:41jaretAre you referring to the AMI in Datomic Cloud? Or are you rolling your own AMI with on-prem? What is the recommendation from AWS/Security department? What findings should be patched? Is it a CVE?
I am unsure what you mean with this question: "Is there a way to get more often patched/updated AMI's than the regular datomic upgrades, so we can schedule this e.g. daily?" What is "this"? The system manager update?#2021-12-2115:09tlonistI’m trying to run peer server using a local transactor with MySQL database.
I think I succeeded in running the transactor, but somehow my peer server keeps on failing.
Here is transactor.properties.
protocol=sql
host=localhost
port=4434
license-key=
sql-url=jdbc:
sql-user=datomic
sql-password=datomic
sql-driver-class=com.mysql.cj.jdbc.Driver
memory-index-max=256m
memory-index-threshold=32m
object-cache-max=32m
I created a database called datomic, created a user/pw granting all with ‘datomic’, and created a table. All according to the guide in bin/sql.#2021-12-2115:09tlonistI’m running this command for peer server
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d datomic,datomic:#2021-12-2115:10tlonistThe error says
Execution error (ConnectException) at java.net.PlainSocketImpl/socketConnect (PlainSocketImpl.java:-2).
Connection refused (Connection refused)
Any idea on how I can get this to work?#2021-12-2115:33jaret@tlonist.sang your peerserver needs to be pointed at the same URL that is output from your transactor. In this case you are pointing peer server at dev to serve a dev DB but you have a sql system.#2021-12-2115:57tlonistAha, thanks for pointing that out!#2021-12-2116:08tlonistNow I’m running into a could not find datomic in catalog problem.
• DB created
• User created
• Table created
• jdbc url properly configured (I think)
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d datomic,datomic:\&password=datomic
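For reference, “creating a database” here means `d/create-database` with the peer API — the MySQL database set up via bin/sql is only storage, and Datomic databases live inside it. A sketch (the jdbc details in this thread are elided, so the URL below is a placeholder):

```clojure
;; "Creating a database" = d/create-database against the storage URI,
;; not CREATE DATABASE in MySQL. The jdbc URL below is a placeholder.
(defn sql-db-uri
  "Datomic URI for database db-name kept in SQL storage at jdbc-url."
  [db-name jdbc-url]
  (str "datomic:sql://" db-name "?" jdbc-url))

(comment
  ;; From a REPL with the peer library on the classpath:
  (require '[datomic.api :as d])
  (let [uri (sql-db-uri "datomic"
                        "jdbc:mysql://localhost:3306/datomic?user=datomic&password=datomic")]
    (d/create-database uri)   ; true if the db was newly created
    (d/connect uri)))
```

Once the database exists, the peer server's `-d dbname,uri` pair can serve it.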
#2021-12-2116:09tlonistDo I need to mention transactor port somewhere in the command?#2021-12-2116:11jaretYou have to have the db created to serve the db. Peer server cannot be used to create the db.#2021-12-2116:11jaretYou will need to create a Datomic db on the system then serve it#2021-12-2116:12jaretYou can do this by connecting to your db from a repl and using the peer api to create database.#2021-12-2116:12tlonisthmm, but I manually created a db called ‘datomic’ as directed in bin/sql, create database.#2021-12-2116:13tlonistdoes ‘creating db’ mean something different than creating an actual database in mysql?#2021-12-2116:19tlonistwow, https://docs.datomic.com/on-prem/getting-started/dev-setup.html#:~:text=in%20this%20guide.-,Creating%20a%20database,-In%20a%20separate was complicated. I totally misread ‘creating a database’#2021-12-2116:19tlonistThanks, it works like a charm! wow#2021-12-2115:34jarethttps://docs.datomic.com/on-prem/peer/peer-server.html#running#2021-12-2115:34jaretEssentially you need to pass the -d dbname and URI for your running system. They are described in the documentation for connect as well:#2021-12-2115:34jarethttps://docs.datomic.com/on-prem/javadoc/datomic/Peer.html#connect-java.lang.Object-#2021-12-2121:06wilkerluciohello, I'm trying to find out how in a Datomic transaction can I express some data to be included in the transaction itself, can someone please provide an example of that?#2021-12-2121:13favilado you mean assertions on the transaction entity? i.e. transaction metadata?#2021-12-2121:13wilkerlucioyes\#2021-12-2121:13favila"datomic.tx" is a tempid that will resolve to the current transaction#2021-12-2121:13favila(or also (d/tempid :db.part/tx) for on-prem)#2021-12-2121:14favilaso just use that in place of the entity#2021-12-2121:14wilkerlucioworks like a charm, thanks!#2021-12-2121:15wilkerluciojust in case anybody else comes to this, an example:
(transact conn [[:db/add "entity" :member/name "Wilker"]
[:db/add "datomic.tx" :audit/cid "avasa.csaca.csa"]])#2021-12-2121:15favilahttps://docs.datomic.com/on-prem/transactions/transactions.html#creating-temp-id#2021-12-2300:45jdkealyCan a datomic transactor run as a kubernetes pod?#2021-12-2305:35Drew VerleeI'm guessing that's not a good fit. A pod doesn't persist state between failure.
The guess here is that a transactor needs to hold state.#2021-12-2305:38Drew Verleethis would be relevant https://docs.datomic.com/on-prem/operation/ha.html#2021-12-2305:38Drew Verleegiven you can run two transactors to provide HA then maybe a pod is fine, and you could get HA by using datomic pro.#2021-12-2305:40Drew VerleeI'm reeling at the idea of trying to make that work vs just using datomic cloud though.#2021-12-2311:57thumbnailI'm pretty sure the transactor state is persisted in the underlying storage#2021-12-2312:03thumbnailFor development we run both the transactor and the peer in docker containers. It seems to work; but it's not a production environment at all.#2021-12-2313:30Joe LaneI’ve seen transactors run in pods in production. #2021-12-2313:44jdkealyCool, thanks y'all... I think the only concern is probably ensuring that it gets enough RAM.
To me it sounds perfectly suited for a pod. You have failover, it doesn't persist to disk, and if you're already on kubernetes, it greatly simplifies deployments between environments.#2021-12-2315:11Joe LaneWell, the transactor certainly writes to disk if you're using any fulltext attributes OR if you're using Valcache with the transactor. You may be able to configure a pod to have an ephemeral NVMe SSD for Valcache, I'm not sure.#2021-12-2317:59jdkealyWhy would I be getting Error communicating with HOST datomic-service on PORT 4334
My kubernetes transactor pod is connecting to postgres and writing its location to storage, in the cluster the service address is datomic-service.#2021-12-2318:01jdkealyport 4334 is exposed
root@lms-app-65bf78f8db-q5z95#2021-12-2318:01favilaIs the peer in the cluster too?#2021-12-2318:02favilapeers communicate with storage independently (not via the transactor)#2021-12-2318:02favilaso datomic-service needs to resolve to postgres for peers, or else you can add alt-host= with an alternative hostname#2021-12-2318:02jdkealyyes the peer is in the cluster#2021-12-2318:04favilahost= is also the bind address for the transactor. What does datomic-service resolve to on the transactor? Did the transactor start up correctly?#2021-12-2318:04jdkealythe transactor started correctly#2021-12-2318:05jdkealydatomic-service resolves to itself on the transactor#2021-12-2318:06jdkealyLaunching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver ...
System started datomic:sql://<DB-NAME>?jdbc:, you may need to change the user and password parameters to work with your jdbc driver
#2021-12-2318:09favilais “resolves to itself” specifically what IP address? The exact same one: 10.100.252.126?#2021-12-2318:09favilaor maybe some internal address? or loopback, or…#2021-12-2318:09jdkealytelnet: can't connect to remote host (10.100.252.126): Connection refused#2021-12-2318:10jdkealythis is on the transactor#2021-12-2318:10jdkealyso yes, they're the same IP#2021-12-2318:13favilaI guess check with netstat#2021-12-2318:15jdkealythe transactor is just showing connections to postgres#2021-12-2318:17jdkealythe peer also shows a connection to postgres#2021-12-2318:20jdkealythis was all working before i attempted to put it in kubernetes. Same DB, which i wiped and reconnected.#2021-12-2318:20jdkealywhen re-connected, it correctly updated the host url#2022-12-2910:32Benjamin(require '[datomic.ion.cast :as cast])
(cast/initialize-redirect :stdout)
(cast/dev {:a "foo"})
1. Unhandled java.lang.IllegalArgumentException
No implementation of method: :-dev of protocol:
#'datomic.ion.cast.impl/Cast found for class: nil
core_deftype.clj: 583 clojure.core/-cache-protocol-fn
core_deftype.clj: 575 clojure.core/-cache-protocol-fn
impl.clj: 14 datomic.ion.cast.impl/fn/G
cast.clj: 74 datomic.ion.cast/dev
cast.clj: 65 datomic.ion.cast/dev
REPL: 423 support-bot.slack/eval29681
REPL: 423 support-bot.slack/eval29681
Compiler.java: 7177 clojure.lang.Compiler/eval
Compiler.java: 7132 clojure.lang.Compiler/eval
core.clj: 3214 clojure.core/eval
core.clj: 3210 clojure.core/eval
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 665 clojure.core/apply
core.clj: 1973 clojure.core/with-bindings*
core.clj: 1973 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 84 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 56 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 152 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 218 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 217 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
deps :
com.datomic/ion {:mvn/version "0.9.50"}
com.datomic/client-api #:mvn{:version "0.8.54"}
com.datomic/client #:mvn{:version "0.8.111"}
com.datomic/client-cloud #:mvn{:version "0.8.102"}
com.datomic/client-impl-shared #:mvn{:version "0.8.80"},
calling cast locally throws an exception
do you know why?#2022-12-2914:05Søren SjørupWith this query in com.datomic/datomic-pro {:mvn/version "1.0.6344"}
(q '[:find (count ?artist) .
     :with ?artist
     :where [1 :a ?artist]]
   [[1 :a 1]
    [1 :a 1]
    [1 :a 2]])
I get java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 during query evaluation. But I would expect a parser time error message like :find and :with should not use same variables: [?artist] that datascript reports. Can/should I report this somewhere?
#2022-12-2923:56kennyCurious: are regular events with a body similar to the below expected?
{
"Msg": "NotifierLoopFailed",
"Host": "...",
"Port": 5555,
"ExMsg": "Connection refused",
"ExClass": "java.net.ConnectException",
"Type": "Event",
"Tid": 176,
"Timestamp": 1640802730379
}#2022-12-3010:06BenjaminI'd like to have an attribute that is a boolean "flag" like "active" is there an idiomatic way? I was thinking to retract and assert the fact but it doesn't really fit I think because I want to show "inactive" in the ui.
Do I use an attribute with "bool" value?#2022-12-3011:45Ben Slessdb.type/boolean#2022-12-3011:45Ben SlessBut you can also perform historical queries#2022-12-3011:46Benjamin:thumbsup:#2022-12-3011:48Ben SlessAdding the "active" property will also complicate all other business logic because you have to add another clause for it. On the other hand if you save history then the active property doesn't add data#2022-12-3015:27kennyI advise against using history for any domain information. Only use it for audit purposes.
#2022-12-3015:48Ben SlessWhy?#2022-12-3015:51kennyIt is not designed for that use case. Large perf penalties, constraints on what you can do. Valentin has a good post on this topic: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html#2022-12-3110:02Ben SlessThis reads like bitemporality#2022-01-0519:57ennWe are getting local (development) transactor processes dying with exit code 137. Is there documentation for transactor exit codes somewhere?#2022-01-0519:57ghadijvm 137 is OOM#2022-01-0520:10enn👍 thanks Ghadi!#2022-01-0522:10jdkealyI'm having trouble connecting to a console running in a docker container using postgres storage
bash-4.3# psql -U datomic -h postgres
Password for user datomic:
psql (9.5.13, server 13.3)
WARNING: psql major version 9.5, server major version 13.
Some psql features might not work.
Type "help" for help.
datomic=> c
^ connection is correct
bash-4.3# /opt/datomic-pro-0.9.5561/bin/console -p 8080 sql datomic:sql://?jdbc:
[1] 898
bash-4.3# Console started on port: 8080
sql = datomic:sql://?jdbc:
Open in your browser (Chrome recommended)
open the browser and i get
The server requested password-based authentication, but no password was provided. trying to connect to datomic:sql://?jdbc:, make sure transactor is running#2022-01-0522:12jdkealymy clojure container can connect without issue#2022-01-0614:46matthaveneryou need to quote the command line arguments because bash interprets & as “run in the background”#2022-01-0614:46matthavenerlike this:
/opt/datomic-pro-0.9.5561/bin/console -p 8080 sql 'datomic:sql://?jdbc:'
#2022-01-0620:23jdkealythanks!#2022-01-0622:13JohnJcom.datomic:memcache-asg-java-client:jar:1.1.0.32 seems to be missing from the repo (latest datomic starter)#2022-01-0622:41jaret@jjaws Could you explain what you are doing? The lib is packaged in the zip? Can you download the zip and bin/maven-install.#2022-01-0622:43jaretNever mind, I think I understand the issue! I believe I missed a step when releasing!#2022-01-0623:04JohnJyeah, should have stated 'remote repo'#2022-01-0623:05JohnJCurious, any reason com.amazonaws/aws-java-sdk-ec2 wasn't updated to 1.12.100 too?#2022-01-0623:33jaret@jjaws The remote repo should now be fixed for memcached! Sorry about that. re: aws We did update that in 1.0.6362. Are you getting it from client or from memcache-asg as a transitive dep? Or from something else?#2022-01-0623:33jaretOh -ec2! I see. I'll have to ask.
#2022-01-0710:06heliosgiven an entity - attribute pair, is there a way to retrieve the tx that added the attribute without doing a query on the history db?#2022-01-0713:25favilaI’m assuming by “added the attribute” you mean added any datom that looks like [?e ?attr _ ?tx true] and you want the ?tx, not that you want the tx that added the attribute (schema) itself.#2022-01-0713:25favilaIf you know the datom is “current” (i.e. is asserted on your “now” db) you don’t need the history db.#2022-01-0713:26favila:where [?e ?attr _ ?tx true] ?tx is the transaction that asserted the datom#2022-01-0713:26favilaotherwise you need the history db, and you need to decide what it means if the attr is asserted+retracted multiple times#2022-01-0714:58helios@U09R86PA4 thanks, you're right i wasn't precise with 'added'. My intention is more like: apart from a datalog query, is there any other way to retrieve the last transaction on a given entity?#2022-01-0714:59helioswas looking for something like (d/tx e attr) in the entity api#2022-01-0715:00heliosMy point is that i'm storing in an attribute some value which changes over time, and i'd like to see "when was it last updated" (the txInstant). I know how to do it easily with a datalog query
(d/q
'[:find ?tx ?attr ?val ?added
:in $ ?e
:where
[?e ?attr ?val ?tx ?added]]
(d/history my-db)
my-eid)
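favila's suggestion above can be sketched without the history db: on a current db value, the fourth slot of a datom clause binds the transaction that asserted the live datom. A minimal sketch, assuming a hypothetical attribute :stock/qty and `db`/`my-eid` from context:

```clojure
;; Last-updated instant for one entity/attribute pair, no history db needed.
;; ?tx is the transaction that asserted the currently-live datom;
;; (max ?when) covers cardinality-many attributes.
(d/q '[:find (max ?when) .
       :in $ ?e ?attr
       :where
       [?e ?attr _ ?tx]
       [?tx :db/txInstant ?when]]
     db my-eid :stock/qty)
```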
#2022-01-0711:35FiVoI am trying to connect to datomic and getting activemq-version.properties is not available as an error. The stacktrace goes through the connector and the artemis client. Any ideas?#2022-01-0713:55FiVoSeems to come from here: https://github.com/apache/activemq-artemis/blob/e364961c8f035613f3ce4e3bdb3430a17efb0ffd/artemis-core-client/src/main/java/org/apache/activemq/artemis/utils/VersionLoader.java#L43#2022-01-0814:29jaretWhat version of Datomic and how are you connecting?#2022-01-0817:27FiVocom.datomic/datomic-pro "1.0.6202"#2022-01-0817:29FiVodatomic on-prem with in-process peer library#2022-01-0817:29FiVothe transactor runs remotely#2022-01-1013:27jaretSo I'd recommend running the latest datomic-pro (http://my.datomic.com/downloads) and ensure you are using an LTS version of java (8,11,17). Then let me know if you still see the error.#2022-01-1115:27jaretI have heard anecdotally that this problem might be because you are using Java 18. I have not yet tested today, but wanted to share that this might be a breaking change that we will have to account for to support Java 18.#2022-01-0808:39Benjaminwhat is a good way to upload a file to s3 from an ion?#2022-01-0808:49furkan3ayraktarYou can use Cognitect’s aws-api to upload files. Or you can directly use Amazon’s S3 Java library.
#2022-01-0916:06teodorluOut of historical curiosity.
Is anyone aware of whether Datomic pull influenced GraphQL, or the other way around? Or is there a "common ancestor" both credit as inspiration?#2022-01-0922:24thumbnailI think it's a fairly common pattern. SQL uses select for example. Interesting thought though
#2022-01-1013:58souenzzoDatomic was released in 2012
GraphQL was released in 2015
select is more like datomic :find
(P)SQL nowadays has some pull-like features
the idea of having two reading interfaces, the pull and the :find, in a single database is pretty original, AFAIK
#2022-01-1016:08jarethttps://forum.datomic.com/t/cognitect-dev-tools-version-0-9-70-now-available/2024#2022-01-1018:09jdkealyIs it possible to update the transactor host address ?#2022-01-1019:26jaret@jdkealy You can set host and alt in the properties file. Is that what you mean?#2022-01-1020:00jdkealyI mean I'm putting the host as an EC2 public DNS, if I was to restart the instance or decide to upgrade and the host name changed, would i recover ?#2022-01-1114:49jaretAre you putting your transactor on the internet available externally?#2022-01-1115:28jaretAlso did you roll your own AMI for the transactor? Or did you use Datomic's provided AMI?#2022-01-1222:47jdkealyrolled my own AMI and it's only available in-network#2022-01-1222:47jdkealythere's a firewall#2022-01-1021:55jacekschaeIs it ok to use :db/id as external id? Is there anything one should be aware of? Differences between Cloud and On-prem?#2022-01-1022:00favilaDon’t use it as an external id#2022-01-1022:07jacekschaeThanks for your reply. Could you please provide more reasoning behind this?#2022-01-1022:09favilaYou can’t control them, e.g. reassert them into a different datomic db#2022-01-1022:12jacekschaeThanks. Are you maybe aware of any recommendations for external ids using Datomic Cloud?#2022-01-1022:14favilarandom UUIDs are a safe bet, and they have the nice property that you know them before you transact them which reduces coordination headaches#2022-01-1022:16jacekschaeThis is what I use and was wondering if there is anything better. Thanks for sharing your knowledge.#2022-01-1101:29souenzzodb/id can change in operations like backup/restore, and you can't control it
I already used :db/id for external operations, like form submissions. I think that for these cases it's OK to use db/id's because they are short-lived interactions
But for things like URL's, never use it.#2022-01-1103:59tony.kayI would still use squuids instead of random ones. The docs say you can get away with random, but they still affect things in large databases. I’m using https://github.com/yetanalytics/colossal-squuid#2022-01-1106:08jacekschaeThanks souenzzo. @U0CKQ19AQ funny that you mentioned colossal-squuid as I was looking at https://github.com/danlentz/clj-uuid, which seems to have similar functionality -- it's only CLJ.#2022-01-1110:34magnarsWhat's the advantage of colossal-squuid over (datomic.api/squuid) ? @U0CKQ19AQ#2022-01-1111:17Linus EricssonI thought colossal-squuid could control which timestamp to use, but no. But it is not tied to datomic jars that can be problematic to distribute and is written in .cljc.
#2022-01-1111:19Linus EricssonAnd yes, don't use :db/id:s (i do it myself in a certain part of the system but will of course regret it later). A separate id is also convenient when doing decanting (selective rewriting) and other transformations of the database.#2022-01-1112:23jacekschae(datomic.api/squuid) is not available in Datomic Cloud
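The advice in this thread (a separate, application-controlled external id) looks roughly like this as schema plus a transaction; :user/ext-id is a hypothetical attribute name:

```clojure
;; A :db.unique/identity UUID the application generates itself: it is known
;; before the transact (unlike :db/id) and survives backup/restore or decants.
(def ext-id-schema
  [{:db/ident       :user/ext-id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

;; Cloud-client style transact; a squuid generator can be swapped in for
;; the random UUID if index locality matters at scale, per the discussion above.
(d/transact conn {:tx-data [{:user/ext-id (java.util.UUID/randomUUID)}]})
```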
#2022-01-1322:50Jake Shelbysquuids are no longer required in Datomic, as per this discussion (because of adaptive indexing) https://forum.datomic.com/t/why-no-d-squuid-in-datomic-client-api/446/2#2022-01-1322:51tony.kayYep, that’s what the docs say, and it is “truthy”. But if you use UUIDs as a standard unique identity on entities, it is HIGHLY recommended#2022-01-1322:52tony.kayIt boils down to the fact that most databases use “recent” data. So in, say, a VAET index you’d like all of your “recent” stuff to sort together, and be likely to be grouped into segments that will likely already be in RAM. I was told, specifically, by Cognitect, that I should be using SQUUIDs in Cloud just a few weeks ago.#2022-01-1322:54tony.kayI was originally using random ones because of the exact doc you’re quoting#2022-01-1112:48cl_jHi everyone, does datomic have something like SQL delete from table where condition , i.e., to delete all entities by some condition?#2022-01-1112:58jacekschaeIn Datomic Cloud there is https://docs.datomic.com/cloud/transactions/transaction-functions.html#db-retractentity..
This does not delete; it makes it "hidden". Datomic On-Prem supports https://docs.datomic.com/on-prem/reference/excision.html, which is not supported with Datomic Cloud.#2022-01-1113:01cl_jDoes this mean that to remove data for privacy reasons, the only option is to use excision?#2022-01-1113:03magnarsYes, excision is the tool to use for privacy concerns.
#2022-01-1113:04magnarsThere is retractEntity in Datomic Peer also, just to make that clear. 🙂
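For reference, the retraction being discussed is a one-liner; the lookup ref here assumes a hypothetical unique attribute :user/ext-id:

```clojure
;; Retracts all attributes of the entity (and its component entities) in one
;; transaction. The facts remain in history -- this hides data, it does not
;; delete it.
(d/transact conn
  {:tx-data [[:db/retractEntity
              [:user/ext-id #uuid "cffeaaf9-861c-4aee-857f-dd0d704aa608"]]]})
```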
#2022-01-1113:30cl_j@U8A5NMMGD what are the options to delete data if we use datomic cloud?#2022-01-1113:48jacekschaeAFAIK there is no option to delete from Datomic Cloud. Workarounds include: storing personal data in different db or using encryption key and then removing the key at appropriate time so that you can't access the information later. Both of them are not ideal and come with their own set of challenges. @U1QJACBUM would there be anything to add? PS. Is there a place where we can vote for features for Datomic Cloud? I feel like excision would be pretty high on that list?#2022-01-1114:00jaretWe monitor customer feature feedback and "votes" in three ways (http://forums.datomic.com , http://ask.datomic.com, and support cases). When an issue/discussion in one of these places is around a feature (like excision) we cross link. We add the context to our internal stories in shortcut for context when looking at features for development. We are aware that excision in Cloud is a feature users desire.
regarding this thread I think it is important to state that we want to understand in what scenario you (@U53B6QVDX) would like to "delete" data from Datomic cloud. The start of this thread simply asks about SQL like conditional deletes. Perhaps there is another driving reason beyond privacy (GDPR) that is causing you to consider the option of deleting and I would like to understand that.
But @U8A5NMMGD is correct. There is no "delete" or excision in Datomic Cloud.#2022-01-1114:08cl_jThanks @U1QJACBUM! Yes, we are deleting data for privacy reasons. And I am currently using :db/retractEntity, deleting more than 100k entities can be very time consuming, that's why i ask whether there is something easier and faster#2022-01-1114:14jaretHi @U53B6QVDX a few important clarifications. Retraction is not deletion. The data is still there. With retract you have created an atomic fact in the database dissociating an entity from a particular value of an attribute. The fact of that retraction remains along with the history of its previous values.#2022-01-1114:15jaretI am happy to look at performance questions around retractEntity. We can help you speed that up or consider another approach so that it isn't as time consuming.#2022-01-1114:16jaretIf that's something you are interested in shoot me a line at <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> with a gist of what you are doing and I can work from there to determine what performance aspects are and what advice might be relevant.
#2022-01-1113:34Benjamindo you know what SecureString means for aws parameter store? And what is a good way to put/get secure string parameters?
#2022-01-1114:00jacekschaeSecureString means it's encrypted and by using aws apis for getting the string out you will get it decrypted at read time. If you are using Datomic Cloud you can use https://docs.datomic.com/cloud/ions/ions-reference.html#get-params to get it from aws ssm; see https://github.com/jacekschae/learn-datomic-course-files/blob/f2378c84bade5cb64018f72aa9179a8c8bb25df4/increments/complete/src/main/cheffy/ion.clj#L11.#2022-01-1114:01BenjaminI see#2022-01-1114:01Benjaminwould you use params over secrets manager?#2022-01-1114:04jacekschaeIf i'm using Datomic Cloud I would use SSM -- AWS Systems Manager as it comes with ion helpers. Don't have a good overview of why you would pick Secrets Manager over Systems Manager.
#2022-01-1116:52Michael WSecrets Manager costs per encrypted string and allows you to enforce lifecycle and rotation. Systems Manager gives you an encrypted string, and you have to handle lifecycle and rotation yourself. I prefer SSM myself since it's cheaper, and a simple lambda function can do lifecycle and rotation if needed.
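A sketch of the ion helper route mentioned above; the parameter path is hypothetical, and SecureString values come back already decrypted:

```clojure
;; Fetch everything under an SSM path from inside an ion.
;; The path is hypothetical; the returned map is parameter name -> value.
(require '[datomic.ion :as ion])

(def params
  (ion/get-params {:path "/datomic-shared/prod/my-app/"}))
```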
#2022-01-1116:33Benjamin[datomic.ion.cast :as cast]
[datomic.ion :as ion]
(cast/initialize-redirect :stdout)
(cast/dev {:msg "SlackApi"})
=>
1. Unhandled java.lang.IllegalArgumentException
No implementation of method: :-dev of protocol:
#'datomic.ion.cast.impl/Cast found for class: nil
core_deftype.clj: 583 clojure.core/-cache-protocol-fn
core_deftype.clj: 575 clojure.core/-cache-protocol-fn
impl.clj: 14 datomic.ion.cast.impl/fn/G
cast.clj: 74 datomic.ion.cast/dev
cast.clj: 65 datomic.ion.cast/dev
what do I do wrong with local cast ?#2022-01-1117:08Ivan FedorovHeyy, nice to meet you all again!
How would one test an ions lambda before production deployment?
Any pre-made handles to simulate the message wrapping by AWS?#2022-01-1322:04stuartrexkingHandlers are just functions.#2022-01-1322:04stuartrexkingCall them with whatever body they will get called with by Lambda.#2022-01-1316:00kennyCan I :find the count of all unique combinations of my :find vars? e.g., In :find ?name ?duration, I want the query to return the count of all unique [?name ?duration] tuples.
#2022-01-1316:03favilaif you’re on-prem, just count the result of the query. There’s no overhead vs doing the count in the query
#2022-01-1316:03favilaif you’re client or cloud: :where [(tuple ?name ?duration) ?name+duration] :find (count ?name+duration)
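Putting favila's client/cloud version together as one query (hypothetical :track/* attributes). Note that without a :with clause Datomic deduplicates result tuples before aggregating, so this counts unique pairs:

```clojure
;; Count of distinct [?name ?duration] pairs. Set semantics (no :with)
;; mean duplicate tuples are removed before (count ...) runs.
(d/q '[:find (count ?name+duration) .
       :where
       [?t :track/name ?name]
       [?t :track/duration ?duration]
       [(tuple ?name ?duration) ?name+duration]]
     db)
```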
#2022-01-1318:22kennytuple is the magic I was looking for. Thank you.#2022-01-1317:22prncDatomic Cloud: Is the :db.unique/identity & upsert behaviour different for composite tuples?
For a unique :my-id I can transact this multiple times:
(d/transact conn {:tx-data [{:my-id #uuid "cffeaaf9-861c-4aee-857f-dd0d704aa608"}]})
But not...
(d/transact conn {:tx-data [{:x 101155069823622
:y "bar"}]})
For unique :x+y tuple.
Seeing Unique conflict: :x+y, value: ...#2022-01-1318:04favilaTo use a composite tuple with upsert effectively, you need to assert the new composite yourself#2022-01-1318:05favilaTempid resolution happens before composite-tuple value updating#2022-01-1318:07prncSo in the example above, would need to add :x+y e.g.
{:x 101155069823622
:y "bar"
:x+y [101155069823622 "bar"]}
?#2022-01-1318:07favilayes#2022-01-1318:08prncAwesome, thanks, will check if that works fine!#2022-01-1318:10prncIf that’s the case, this sentence “Composite attributes are entirely managed by Datomic–you never assert or retract them yourself.” (https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples) might be a bit confusing? It did throw me off. Maybe just a misunderstanding on my part.#2022-01-1318:26favilaThis is an edge case
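For context, a schema sketch of the composite under discussion (mirroring the thread's hypothetical :x/:y idents):

```clojure
;; :x+y is a composite tuple derived from :x and :y. The :db.unique/identity
;; on it is what drives upsert -- and, per favila above, what you must assert
;; manually in the upserting transaction because tempid resolution runs first.
[{:db/ident :x,   :db/valueType :db.type/long,   :db/cardinality :db.cardinality/one}
 {:db/ident :y,   :db/valueType :db.type/string, :db/cardinality :db.cardinality/one}
 {:db/ident :x+y, :db/valueType :db.type/tuple,  :db/tupleAttrs [:x :y]
  :db/cardinality :db.cardinality/one, :db/unique :db.unique/identity}]
```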
I just mentioned the docs, because they put me on the wrong track 😉
Good to know about the edge case!#2022-01-1322:41Jake ShelbyIn case you wanted info from the datomic team https://forum.datomic.com/t/db-unique-identity-does-not-work-for-tuple-attributes/1072/2#2022-01-1416:50ennIs there a way to express something like a case in Datalog? That is, I want to choose one of a number of different sets of clauses to use in the query by comparing a logic variable against a number of value literals.#2022-01-1417:19Lennart BuitYou can ‘or-join’, and in each branch do something like this:
(and [(identity ?condition) :SENTINEL] …)
#2022-01-1417:20favilaThis is better (and [(ground :SENTINEL) ?condition] …) but the same thing
#2022-01-1417:21favila(or-join [?dispatch-val ?out]
(and [(ground :case1) ?dispatch-val]
[(ground 1) ?out])
(and [(ground :case2) ?dispatch-val]
[(ground 2) ?out]))#2022-01-1417:22favilaor as named rules:
[[(myrule ?dispatch-val ?out)
[(ground :case1) ?dispatch-val]
[(ground 1) ?out]]
[(myrule ?dispatch-val ?out)
[(ground :case2) ?dispatch-val]
[(ground 2) ?out]]]
#2022-01-1417:23favilaThe basic approach is just to implement a rule multiple times, and have some condition in each implementation that makes them non-overlapping for a given input#2022-01-1417:26Lennart Buit^important to realize: the results of the branches of an or-join are union’ed, that's why you need to make them disjoint. (I’ve fallen in that hole before :p)#2022-01-1418:29ennThanks, this makes sense. Much appreciated.#2022-01-1514:41Benjamincognitect.aws.credentials CredentialsProvider can I coerce this to com.amazonaws.auth.AWSCredentialsProvider ?#2022-01-1514:56ghadiNo these are unrelated#2022-01-1515:10kennyHowever, you can wrap it pretty easily 🙂 e.g.,
(defn create-AWSCredentialsProvider
"Wraps aws-creds/CredentialsProvider in the AWSCredentialsProvider interface for use
in the AWS Java SDK."
[assume-role-map]
(let [provider (auto-refreshing-credentials-provider assume-role-map)]
(reify AWSCredentialsProvider
(getCredentials [_]
(when-let [creds (anom/anomaly! (aws-creds/fetch provider))]
(reify AWSSessionCredentials
(getAWSAccessKeyId [_] (:aws/access-key-id creds))
(getAWSSecretKey [_] (:aws/secret-access-key creds))
(getSessionToken [_] (:aws/session-token creds)))))
(refresh [_]))))
#2022-01-1712:57timo#2022-01-1808:13cmdrdatsgood morning! I'm looking for more examples of the on-prem datomic metaschema to trinodb for analytics setup - the docs seem pretty thin 😕 anyone know of any other examples?#2022-01-1809:54dazldcould someone share an up to date comparison between datomic on-prem and cloud? it’s been a couple of years - the last time I looked at cloud there were some subtle differences iirc.#2022-01-1816:18tony.kayThe transaction log listener stuff is trivial to simulate, since there is a tx log. The lack of excision is a bit more difficult to work around, and up until now there is no official way to do a backup/restore of the database in cloud.#2022-01-1816:18tony.kayThough the backup one is a top priority and is supposed to be coming “very soon”#2022-01-1909:56dazldthanks tony!#2022-01-2020:14bhurlowno d/entity in cloud#2022-01-2020:15bhurlow(yet)#2022-01-2110:36dazldoh, interesting! d/entity is useful, but I can see why it’s not a priority. It is a bit magical..#2022-01-2521:20bhurlowI think it's more of a performance issue, d/entity is not wire friendly as you could be doing an i/o or request to get a single attribute#2022-01-2609:28dazldah, I think the non-uniform perf issue is a consequence of the magic in this case, but yes, exactly.#2022-01-1809:55cmdrdatstwo big things that stand out for me is the lack of being able to listen at the transaction log and no excision#2022-01-1810:02dazld@cmdrdats thank you - I didn’t know about either of those. thanks for sharing.#2022-01-2013:15cmdrdatsanother metaschema question... It seems like the metaschema data is not actually ETL'ing out of Datomic, rather querying Datomic directly - if this is the case, is there a way to specify a specific value of the db you want to query? so if I want to run a SQL query against yesterday's db state, for example..#2022-01-2013:50favilaYes it is querying and no there is no way. 
They must be getting that feature request a lot though #2022-01-2013:51cmdrdatscool, thanks 🙂 that would be a super handy thing to be able to do 😅
#2022-01-2020:39bhurlowHas anyone here had to "decant" a Datomic database to clear out some bad data in the tree?#2022-01-2020:40bhurlowI'm wondering if the best approach is to iterate through the transaction log#2022-01-2020:43favilaIf you can tolerate it being in history, just retraction should be fine. (If you can’t, make sure you’re not falling into this trap https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html)#2022-01-2020:44favilaif you can’t tolerate it in history and are using on-prem, consider excision#2022-01-2020:44favilaif excision isn’t possible, then iterating and transforming the transaction log is your only option#2022-01-2021:34bhurlowThanks @U09R86PA4, the angle here is not data privacy but rather performance. We've identified some nodes in our tree that have some unfortunately large string values (megabytes +) and we're observing some very high read and write impacts in ddb. I've read that excision does not necessarily help in this case. We do not need the history for business logic#2022-01-2021:34favilaoh, oof#2022-01-2021:34favilayeah, I’ve had that pain#2022-01-2021:34bhurlowyea 🤒#2022-01-2021:35favilaexcision can get rid of it, though#2022-01-2021:35favilabut fortunately the decant should be relatively straightforward if you have a solid plan for putting those strings somewhere else#2022-01-2021:36bhurlowok noted. We do have a strategy now, we didn't then#2022-01-2021:36favilawe have a rule in our database (checked by test suite) that every string must have an attribute predicate that limits length#2022-01-2021:36bhurlowoh nice idea#2022-01-2021:36bhurlowwe'll definitely do that#2022-01-2021:37bhurlowWhen you had this issue, where do you see most of the impact? What we're seeing is periodic storage IO spikes that align with the transactor index rebuilds#2022-01-2021:39favilaUnfortunately it was really difficult for us to put our finger on it. 
It manifested as drag on the entire system#2022-01-2021:40favilathe biggest problem we had was fulltext indexes would occasionally produce huge index merges, but that’s only for the fulltext-indexed values#2022-01-2021:40favilathere’s no way to drop fulltext from an attribute, so we had to actually physically move those values to a different attribute#2022-01-2021:41bhurlowyep, this is our exact experience as well#2022-01-2021:41bhurlowdid you then excise the fulltext ones or just stop putting them?#2022-01-2021:42favilabut we also saw unpredictable big index sizes, large uncacheable segments in memcached, and inspecting the segments often they were DirNodes > 1MB#2022-01-2021:42favilawe just stopped putting them to stop future writes. Excision at our scale would have been untenable#2022-01-2021:42favilawe are prepping for another round of decanting though#2022-01-2021:43bhurlowgot it, this is all very familiar, appreciate the wisdom#2022-01-2021:44bhurlowon the decant, I'm assuming you're using some global identifier to negotiate the entity IDs?#2022-01-2021:46favilaYeah fortunately we did a decant ~2-3 years ago, the purpose of which was to renumber entity ids with partitioning for performance (I wasn’t there). During that time all :db/id dependencies were shaken out of the code so it’s resilient to decants using ids we control ourselves
#2022-01-2021:47favilathe decant itself would just keep track of the entity id mapping; we also injected an additional assertion (long-type not ref type) with the entity-id of the entity in the previous system, which was good for correlating later#2022-01-2021:49bhurlowthat's a good tip#2022-01-2021:52bhurlowdo 100% of your entities have unique IDs? we have some obvious high level domain entities which get UUIDs, but there are certain things like a referenced list of settings that do not. Seems like those would need them, or should be refactored to be flattened onto the main entities#2022-01-2021:52bhurlowtable brain#2022-01-2021:54favilaAlmost all of them do#2022-01-2021:54bhurlownoted. thanks a bunch#2022-01-2022:08favilaSomething you might consider if you really don’t need history is to copy every assertion at a time T, then do a partial decant from T to now. That might be better or worse depending on your circumstances
#2022-01-2022:15bhurlowyou mean to enable doing it in chunks?#2022-01-2022:21favilaThe “copy assertions at time T” part is to throw away history. You get a smaller target db, it’s faster than replaying every tx, and you avoid having to deal with any weirdness in the distant past of your tx log. The partial decant is just to reduce downtime--whatever happened in the db while you were doing the bulk copy. If you can tolerate downtime you don’t need it.
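The "copy assertions at time T" idea might be sketched like this (peer API, hypothetical helper names, heavily simplified: ref-typed values also need remapping, and schema install plus batching are omitted):

```clojure
;; Build tx-data for a history-free copy of the db as of basis t.
;; remap-eid is a hypothetical fn from old entity id -> tempid/new id.
(defn copy-as-of-tx-data [source-db t remap-eid]
  (for [d (d/datoms (d/as-of source-db t) :eavt)]
    [:db/add (remap-eid (:e d)) (d/ident source-db (:a d)) (:v d)]))
```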
#2022-01-2315:14Benjamincan an ion lambda be invoked "concurrent" by default? Or does each invoke happen one after the other? I'm wondering what I need to do to make a lambda idempotent. It'd be easy to put a fact into datomic saying "I handled this thing"#2022-01-2320:51Joe LaneIon Lambda invocation by default is concurrent.
What problem are you trying to solve? I think focusing on Lambda's might not be the best approach here, they are just a mechanism.#2022-01-2418:24magnarsAny thoughts on why I'm getting "activemq-version.properties is not available" when trying to connect to datomic-free? This is an old hobby project where this has been working without a hiccup for years. The only change I can think of is that we've started using datomic-pro at work, so now I have some credentials for http://my.datomic.com in ~/.m2/settings.xml#2022-01-2418:25magnarsHowever, temporarily removing that file doesn't seem to impact me being able to connect to the old datomic-free version.#2022-01-2418:38magnarsI even tried rm -fr .cpcache in desperation.#2022-01-2418:45favilaare you running an uberjar, or do you have an uberjar on your classpath?#2022-01-2418:52magnarsGood question. I am invoking the transactor by cd-ing to the unzipped folder, and running bin/transactor . The client gets com.datomic/datomic-free from maven.#2022-01-2418:53magnarsThis is an old project with an old datomic version "0.9.5697" - but it's been running perfectly for years.#2022-01-2418:55magnarsI'm having a hard time understanding how starting to use datomic-pro for work could impact this, but it's the only seemingly relevant change I can see in between it working and not.#2022-01-2419:16JohnJIs the transactor's version 0.9.5697 too?#2022-01-2419:17magnarsI'm not sure why, but it's running 0.9.5561#2022-01-2419:17magnarsI'll try changing the client to match that.#2022-01-2419:18magnarsthat didn't make a difference, unfortunately.#2022-01-2419:19magnarsI realize I'm running on a very old version, using the outdated datomic-free-version. It might be time to upgrade, but that's a couple of evenings that I don't really have for an old hobby project. 
And it's a mystery to me how it just now stopped working.#2022-01-2419:30magnarsI can't believe it, but I just found the culprit.#2022-01-2419:30magnarsIt's https://github.com/clojure-emacs/enrich-classpath#2022-01-2419:30magnarsA new feature in CIDER.#2022-01-2419:31magnarsTurning "enrich-classpath" off solves the issue.#2022-01-2419:36JohnJhappier with inf-clojure 😉#2022-01-2507:52Pieter SlabbertI have a transaction id and I want to be able to see everything done in that transaction, but so far I can't find a way to do that.
Something like
(d/q '[:find ?e ?attr ?val
:in $ ?tx
:where
[?e ?attr ?val ?tx true]]
db tx-id)
Gives me :db.error/insufficient-binding Insufficient binding of db clause: [?e ?attr ?val ?tx true] would cause full scan
Is there a way to do this?#2022-01-2508:05magnarsMaybe you could use tx-range with log ?#2022-01-2509:06Pieter Slabbert@U07FCNURX Thanks! That gave me exactly what I needed!
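The tx-range answer spelled out (peer API; `tx-id` is the transaction id from the question):

```clojure
;; The log covers exactly one transaction over [tx-id, tx-id + 1);
;; :data holds every datom asserted or retracted in that transaction.
(let [log (d/log conn)]
  (:data (first (d/tx-range log tx-id (inc tx-id)))))
```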
#2022-01-2518:31Jem McElwaindoes anyone have any advice about valcache on eks/k8s? i was originally planning on running our peers as statefulsets to simplify the storage topology, but we’ve recently hit a point where statefulset rollout would be too slow… wondering about strategies for warming dynamically provisioned EBS volumes. was considering running some kind of sync from a RWX EFS mount, but worried about increasing complexity of our solution#2022-01-2521:22ghadiuse dynamo + valcache#2022-01-2521:22ghadinot sure you need anything on EBS#2022-01-2600:08Jem McElwainsorry, i’m not sure i understand what you’re suggesting. the peers need to provision storage dynamically, which in the scope of EKS would be EBS volumes#2022-01-2601:30ghadiI don’t think you need EBS on peers @U02AEH4M8GY #2022-01-2601:32Jem McElwainthe alternative to EBS is using direct attached storage on the eks ec2 nodes, but this is a huge anti-pattern in k8s land#2022-01-2601:35ghadiI want to know why you think you need ebs#2022-01-2601:37Jem McElwainbecause the data for valcache needs to live on an ssd#2022-01-2601:37Jem McElwainper the documentation#2022-01-2602:10ghadiok you're using valcache, thanks.#2022-01-2602:11ghadiand now I see that you wrote that in the first sentence. facepalm forgive me, I have a new infant @U02AEH4M8GY
#2022-01-2602:18Jem McElwain@U050ECB92 congrats!#2022-01-2602:19ghadithanks! would it be possible to provision an ephemeral PV with a good storage class?#2022-01-2602:20ghadiit doesn't need to persist / migrate with pods / be stateful#2022-01-2602:20Jem McElwainyes, totally, i have provisioning dynamic/ephemeral volumes all sorted, but given our shift to k8s is coupled with our desire to increase deployment cadence, i’m worried about cache locality suffering a lot if we’re deploying frequently#2022-01-2602:20Jem McElwainsince the volumes will be totally cold each time a peer starts#2022-01-2602:21ghadiyou should open up a question with the datomic team (see topic). they'll have helpful answers (I don't work for them)#2022-01-2602:21ghaditotally get your situation now (finally 🙂 )#2022-01-2606:47cmdrdatssigh. really? https://gist.github.com/favila/f33518c7e72a4079b5948d2f853053b0#2022-01-2610:43tatutI don't understand what you are asking here.#2022-01-2611:52cmdrdatsmostly I'm sad that transaction functions can't be aware of their surroundings.. so if you implement a :db/inc function, if you somehow [[:db/inc e :stock/qty] [:db/inc e :stock/qty]] it will increment by only 1 instead of 2, and there's simply no way around it.
This matters when you're composing many different movements into a single transaction, so if I move money from account A to B and from A to C (maybe payment + bank fees)..#2022-01-2611:55cmdrdatsthe link above introduces a nonce that will protect it from happening (so throw exception) - but that doesn't answer the actual need.. to do it reliably, you'd have to collate it outside the transaction..#2022-01-2613:07favilaIf you have a model where transactions are commands applied atomically, I’m not sure what alternative is possible without pre-awareness of the possibility of composition#2022-01-2613:16favilaYou could have a transact wrapper which inspects tx data for tx fns it knows how to coalesce. You could wrap the separate txs that you want to combine in a tx fn that applies each sequentially with d/with, extracts a combined result, and transacts that. If modifying datomic is possible, this could be an inbuilt feature—similar to db/ensure, there could be a special tx fn that is only executed with the result db after all other txs are applied and is allowed to emit more commands, and the combined result is applied. This would be handy for keeping aggregates up to date. Although figuring out the compositional semantics of this and avoiding infinite recursion would be a challenge. All of these that simulate multiple txs in the transactor would probably be a significant performance penalty #2022-01-2613:19favilaThe reason you can do this in sql (assuming your isolation level is configured correctly) is because the database is mutable and there are implicit (often virtual) locks being acquired as you work. Transaction fns are just expanding commands locklessly, there is literally no new db value to read until the entire transaction is applied atomically#2022-01-2614:27cmdrdatsye, I was optimistically wishing for it to reduce over the db value and tx function (kinda like a d/with-tx vibe) as it goes through the transaction... 
that probably opens up a whole other can of worms...#2022-01-2614:28cmdrdatsI'll probably have to do a pre-tx scan for the functions and run a combination function like you mention.... still - painful xD#2022-01-2614:36refset> I was optimistically wishing for it to reduce over the db value and tx function (kinda like a `d/with-tx` vibe) as it goes through the transaction
it's probably worth noting that this is how DataScript (and others) behave, i.e. ordering of the ops within the tx is important#2022-01-2615:34JohnJAnd this is why many prefer to just stay in classic RDBMS land 😉#2022-01-2615:36JohnJYou have to implement too much data integrity in your application code with datomic#2022-01-2616:12cmdrdats@U899JBRPF that's interesting to note, thanks!#2022-01-2616:29cmdrdats@U01KZDMJ411 from my experience, something like mysql is even more broken, since they only give you a consistent view of the table/dataset as of query time.. so datomic is already worth the extra thinking work{:tag :div, :attrs {:class "message-reaction", :title "heavy_plus_sign"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("➕")} " 1")}
#2022-01-2616:37JohnJdoes mysql not have read transactions?#2022-01-2617:48cmdrdatsYes, but it only protects after you've queried already.. if you query table A, something changes table B, then query table B, you see the changes even with read tx.. this caught me out, for a long time i thought mvcc would do the same as datomic's stable read value#2022-01-2619:33potetm@U050CLJ53 I’m really confused. Wouldn’t a [:db/add e :stock/qty 2] work?#2022-01-2619:35potetmThat gist is getting around a fundamental design decision of datomic: Datomic is designed for read-heavy workloads by making heavy use of caching. That gist was trying to turn datomic into a read-once thing.#2022-01-2619:35cmdrdatsYes, it would, but that's a contrived example - in practice, the two callsites adding to the transaction are not connected#2022-01-2619:36favila(d/transact conn (concat (tx-data-fn1) (tx-data-fn2))), where tx-data-fn1 and 2 independently do their own CAS for example.#2022-01-2619:37cmdrdatsNice concise example @U09R86PA4 😀#2022-01-2619:37potetmYeah, I mean. I get the ask. It fundamentally violates tuple logic, right?#2022-01-2619:37potetmidk maybe an overstatement on my part#2022-01-2619:38potetmAt any rate, it still seems like a trivial code-design thing. Accumulate that number that you want to inc before making your tuples.#2022-01-2619:38potetmDoesn’t seem particularly onerous to me.#2022-01-2619:38potetmI know, I know… old code, existing codebase… 😄#2022-01-2619:38cmdrdatsMy use case involves a stock management system#2022-01-2619:39cmdrdatsMoving stock from all over the place to other places, with qty's et al..#2022-01-2619:40cmdrdatsSo the function is a bit more complex than that, but the limitation fundamentally boils down to me expecting transactor functions to work differently than they do#2022-01-2619:40potetmRight. It’s not that you can’t work around it. 
It’s that you just didn’t expect it.#2022-01-2619:41cmdrdatsYe - i can do whatever just before the transact in middleware if i need to, but it's unfortunate#2022-01-2619:41potetmI’ve had to make pretty involved tx fns before to do those sorts of operations atomically. You end up with a tx fn for basically every domain operation.#2022-01-2619:42cmdrdatsAnd it's messy to convey to other developers that they need to implement this kind of thing.. thankfully it's few and far between#2022-01-2619:42potetme.g. [:mv-stock from to amount]#2022-01-2619:43potetmand that emits a whole pile of tuples^#2022-01-2619:43favilait adds an additional constraint: :mv-stock cannot appear twice in the same tx#2022-01-2619:44favilathat’s the part that can be surprising. you can put all this work into making :mv-stock atomic and then have it silently do the wrong thing#2022-01-2619:45potetmRight, that’s a real gotcha. That’s fair.#2022-01-2619:45favilathe nonce is a belt-and-suspenders technique to avoid that. if you can cheaply express valid end state in an unparameterized way with :db/ensure , that would be another{:tag :div, :attrs {:class "message-reaction", :title "heavy_check_mark"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("✔️")} " 1")}
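[editor's note] The coalescing failure discussed above can be made concrete with a pure-Clojure simulation — no Datomic involved, and `:db/inc`, the entity layout, and both helper names are hypothetical. The point is that every tx fn expands against the same db snapshot, so two increments of the same attribute collapse into one:

```clojure
(defn inc-cmd->assertion
  "Expand a hypothetical [:db/inc e attr] command against a db snapshot,
  modelled here as a plain map of entity id -> entity map."
  [db [_ e attr]]
  [:db/add e attr (inc (get-in db [e attr] 0))])

(defn apply-tx
  "Apply expanded assertions atomically; for a given [e attr] the last write wins."
  [db assertions]
  (reduce (fn [db' [_ e attr v]] (assoc-in db' [e attr] v)) db assertions))

(def db {1 {:stock/qty 5}})

;; Both commands expand against the SAME snapshot...
(def expanded (mapv #(inc-cmd->assertion db %)
                    [[:db/inc 1 :stock/qty] [:db/inc 1 :stock/qty]]))
;; => [[:db/add 1 :stock/qty 6] [:db/add 1 :stock/qty 6]]

;; ...so the net effect is +1, not +2:
(get-in (apply-tx db expanded) [1 :stock/qty])
;; => 6

;; Folding the db value through each command (what cmdrdats wished for) gives +2:
(get-in (reduce (fn [db' cmd] (apply-tx db' [(inc-cmd->assertion db' cmd)]))
                db
                [[:db/inc 1 :stock/qty] [:db/inc 1 :stock/qty]])
        [1 :stock/qty])
;; => 7
```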
#2022-01-2619:47potetm@U050CLJ53 @U09R86PA4 Thanks for walking me through that! Lord knows you didn’t have to explain it, but you helped me fully understand what the problem was.#2022-01-2809:25octahedrionplease don't use the word "nonce"#2022-01-2810:12cmdrdatshuh?#2022-01-2810:14cmdrdatsI see. it's slang in different parts of the world. https://en.wikipedia.org/wiki/Nonce_word is the specific usage here.#2022-01-2606:48cmdrdatstransactor functions have suddenly lost 99% of their usefulness to me#2022-01-2618:10wilkerluciohello, when using Datomic Cloud, how do you run local tests? my idea of an approach was to use datomic.api for testing (in-memory) and datomic.client.api for prod, but doing so adds an overhead that I have two different APIs to deal with (from each namespace), is there a way to use a single API against both Cloud and on-prem? or that's something I have to create myself? or is there another approach to handle this?#2022-01-2618:24kennyI believe the client api is intended to be a single api for cloud and on-prem. The on-prem datomic.api has very different behavioral characteristics that I do not think should be abstracted over.#2022-01-2618:28wilkerluciothanks, and I just found the answer for the local dev thing: https://docs.datomic.com/cloud/dev-local.html{:tag :div, :attrs {:class "message-reaction", :title "point_up_2"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👆")} " 1")}
#2022-01-2618:53César OleaMaybe relevant: https://github.com/ComputeSoftware/dev-local-tu we use it for unit tests along with dev-local. Very useful!#2022-01-2618:54César OleaThanks for dev-local-tu @U083D6HK9!{:tag :div, :attrs {:class "message-reaction", :title "+1::skin-tone-2"}, :content ({:tag :span, :attrs {:class "emoji"}, :content nil} " 1")}
#2022-01-2814:48ilshadHello! I’m going to finally upgrade the cloud from 781 to 931. I’ve already tested our ions app with 931 in a separate account, upgraded code to work with API Gateway and cloudfront / custom domains - it’s all right. The question is: I want to change the name of our primary compute group 🙂 so instead of upgrading both stacks, I’m going upgrade only storage stack, and delete the primary compute stack and create a new primary (with new name). Are there any troubles to expect? (i.e. some AWS resources won’t be deleted and naming troubles may occur, etc.)?#2022-01-2814:51jaretAs long as you are re-naming compute this should be fine. Primary can be named whatever you would like and pointed at the system storage template.#2022-01-2814:54ilshad@U1QJACBUM Thanks!#2022-01-2814:55jaretShoot me an e-mail if you run into anything <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> 🙂#2022-01-2814:56ilshadok)#2022-01-3102:07denikwhat’s the best way to find entities without db.type/ref relationships? in a graph these could be called orphan nodes.#2022-01-3107:06thumbnailYou could query all ref-type attributes, and construct a query with a missing? clause for all of them I think?#2022-01-3115:54Adam Lewismissing? still needs an e ... this would only tell you what entities don't have an outgoing reference, but I think @U050CJFRU is asking how to find all entities for which there is no incoming reference, across all ref type attributes{:tag :div, :attrs {:class "message-reaction", :title "eyes"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👀")} " 1")}
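[editor's note] A small pure-Clojure sketch of the "no incoming reference" check being asked about (no Datomic here; datoms are plain `[e a v]` vectors and the attribute names are invented):

```clojure
(defn orphans
  "Entities that never appear as the value of a ref-typed datom."
  [datoms ref-attrs]
  (let [es      (set (map first datoms))
        targets (set (keep (fn [[_ a v]] (when (ref-attrs a) v)) datoms))]
    (remove targets es)))

(def datoms [[1 :node/name "root"]
             [2 :node/name "child"]
             [1 :node/child 2]       ; ref datom: entity 2 has an incoming ref
             [3 :node/name "lonely"]])

(set (orphans datoms #{:node/child}))
;; => #{1 3}
```

A real implementation would stream the AEVT/VAET indexes instead of materializing sets, as discussed below, but the set logic is the same.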
#2022-01-3115:55Adam LewisOff the top of my head, I think you need to do a full index scan to determine this. Scan through EAVT and VAET together. For each E in EAVT see if there is a corresponding V in VAET#2022-01-3116:00Adam Lewison either index, once you get a "hit" you can skip fetching all the intermediate datoms by seeking to e + 1#2022-01-3117:02denikI solved it this way
(def refs
  (into #{}
        (comp
         (keep (fn [[k v]]
                 (when (= (:db.type v) :db.type/ref)
                   k)))
         (mapcat (juxt identity sdu/reverse-ref)))
        (get-schema)))
(into #{}
      (comp (map (comp entity :e))
            (remove #(some % refs)))
(datoms :ave :ident))#2022-01-3117:05Ivan FedorovAre there any resources on implementing the internal “auto-increment” ids or their analogues on Datomic Cloud?
I want something handy for e-commerce staff to use, so these aren’t primary ids#2022-02-0918:31Daniel JompheHi Ivan, did you find an answer? We're seeking the same thing here also.#2022-02-0918:33Daniel JompheHas anyone implemented or documented how to maintain sequential, auto-increment IDs with Datomic Cloud?#2022-02-2213:05Ivan Fedorov@U0514DPR7 I did this using transaction functions#2022-02-2213:05Ivan FedorovAt your own risk#2022-02-2213:07Ivan FedorovCombine this
https://gist.github.com/favila/f33518c7e72a4079b5948d2f853053b0
And this
https://docs.datomic.com/cloud/transactions/transaction-functions.html#creating#2022-02-2219:53Daniel JompheThanks Ivan!#2022-01-3118:36tony.kayIn Datomic cloud, if we change an attribute to noHistory true, will old segments that have old values in them ever be re-written? I have a system I'm working with that was using large strings. They've stopped doing that and retracted them all, but since there is no excision there isn't a real way to compact the old fragmentation as far as I know...wondering if the noHistory flag will help at all? I think that the segments are immutable, so I kind of doubt that it's going to fix "fat segments" that have large strings....but anyway, just fishing for possible things that might help#2022-01-3118:44favilaif it’s anything like on-prem, noHistory will not proactively remove history for that attr, merely omit it from future segments that happen to involve it.#2022-02-0112:39timoHow do I do that?
(d/q '[:find ?e ?contract-end
       :in $ ?date
       :where
       [?e :contract/end ?contract-end]
       [(< ?date ?contract-end)]]
     conn
     (java.util.Date.))
I am trying to get only entities that have a date that is later than the one passed. But I am getting ClassCastException 😞#2022-02-0112:47pyryhttps://docs.datomic.com/on-prem/query/query.html#calling-instance-methods#2022-02-0112:51pyryJust evaluating (< (Date.) (Date.)) in the repl gives a ClassCastException, so no wonder if Datomic does as well.#2022-02-0112:55pyryShould be able to do something like [(.before ?date ?contract-end)] though.#2022-02-0112:57timoright, thanks. That seems more correct.#2022-02-0112:58timoit's just giving me still CCE... :thinking_face:#2022-02-0112:58timobut probably the problem is elsewhere... the entity value is an inst though#2022-02-0113:00favilaDatomic supports comparison on dates.{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 2")}
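[editor's note] The ClassCastException here is easy to reproduce without Datomic at all: `clojure.core/<` only compares numbers, while `.before` and `compare` work on `java.util.Date` (and, as favila notes, Datomic's query comparators handle instants natively):

```clojure
(import 'java.util.Date)

(def earlier (Date. 0)) ; epoch, 1970-01-01
(def later   (Date.))   ; now

(.before earlier later)        ;; => true
(neg? (compare earlier later)) ;; => true

;; clojure.core/< casts its arguments to numbers, so Dates blow up:
(try (< earlier later)
     (catch ClassCastException _ :class-cast!))
;; => :class-cast!
```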
#2022-02-0113:01favilaIs conn a connection or db?#2022-02-0113:03timo(def older (Date.))
(d/q '[:find ?contract-end
       :in $ ?date
       :where
       [?e :contract/end ?contract-end]
       [(> ?date ?contract-end)]]
     [[1 :contract/end (Date.)]
      [2 :contract/end older]]
     (Date.))
;; => #{[#inst "2022-02-01T12:56:01.393-00:00"]}
This works, that means there is somewhere a non-inst in the query 🍌#2022-02-0113:03timothanks!#2022-02-0113:18pyryRight, sorry about that false lead.#2022-02-0113:25timono worries#2022-02-0113:27favila@U4GEXTNGZ I think your class cast may be about trying to cast a connection to a db, nothing about dates. Your db var is named conn which is fishy#2022-02-0113:27favilaWhat is the actual exception?#2022-02-0113:28timono sorry about that... I am using d/db to get the conn... I am used to use datahike and there usually it's a connection that is used to query.#2022-02-0113:29timothere was actually a (get-else in the query that I did not recognize as the problem here#2022-02-0113:30timoit was giving me a string when there is no contract-end there#2022-02-0113:42souenzzo> it was giving me a string when there is no contract-end there
This is weird. When it doesn't have :contract/end, it should simply not match#2022-02-0113:48souenzzo(require '[datomic.client.api :as d])
(let [conn (-> {:server-type :dev-local
                :system "hello"}
               d/client
               (doto (d/delete-database {:db-name "hello"})
                     (d/create-database {:db-name "hello"}))
               (d/connect {:db-name "hello"})
               (doto (d/transact {:tx-data [{:db/ident :contract/id
                                             :db/valueType :db.type/string
                                             :db/cardinality :db.cardinality/one}
                                            {:db/ident :contract/end
                                             :db/valueType :db.type/instant
                                             :db/cardinality :db.cardinality/one}]})))
      {:keys [db-after]} (d/transact conn
                                     {:tx-data [{:contract/id "1"}
                                                {:contract/id "2"
                                                 :contract/end #inst "2000"}
                                                {:contract/id "3"
                                                 :contract/end #inst "3000"}]})]
  (d/q '[:find ?e ?contract-end
         :in $ ?date
         :where
         [?e :contract/end ?contract-end]
         [(< ?date ?contract-end)]]
       db-after
       (java.util.Date.)))
=> [[83562883711053 #inst"3000-01-01T00:00:00.000-00:00"]]
#2022-02-0113:48timosee
(d/q '[:find ?contract-end
       :in $ ?date
       :where
       [?e :contract/end ?contract-end]
       [(> ?date ?contract-end)]]
     [[1 :contract/end (Date.)]
      [2 :contract/end older]
      [3 :contract/end "-"]]
     (java.util.Date.))
results in
; eval (current-form): (d/q '[:find ?contract-end :in $ ?date :where [?e :...
; (err) Execution error (ClassCastException) at (REPL:1).
; (err) null
#2022-02-0113:49timoyeah, sorry. there is a get-else that tripped me off. it inserted a string where there should be an inst.#2022-02-0113:50souenzzoto use datomic.api (peer api) you can change to
conn (-> "datomic:"
         (doto d/delete-database
               d/create-database)
         d/connect)
need to remove the {:tx-data map too#2022-02-0113:51souenzzoYou are creating an invalid database:
[3 :contract/end "-"]#2022-02-0208:36markgdawsonI'm running a local dev version of datomic. Is there a way to roll it back to an arbitrary previous database state? i.e. (lossily) discard datoms more recent than a given t. It seems like a simple thing to do conceptually, but I can't find any support for it.
I tried using datomic restore-db "backup-src" "backup-dest" t but it seems to work only with the t taken as a backup. I'd like to roll back to the database at a previous t. Is that supported?#2022-02-0215:52emccueWhy not just retract all the datoms in a certain time range?#2022-02-0217:21jaretRetractions are intended for cancelling the effect of an assertion. Dissociating an entity from a particular value of an attribute. The history of your retraction is still present in the history DB. I am not saying that doesn't meet Mark's requirements merely pointing out I would not immediately reach for retraction.#2022-02-0217:22jaret@UDC7GA4QG You can view the available "t" values for restore for a given backup with the list-backups command, documented here:
https://docs.datomic.com/on-prem/operation/backup.html#listing-backups
You're correct that you can only restore to the points in time at which a backup was taken.#2022-02-0217:23jaretAnd I do want to clarify this is only available in on-prem. You mentioned "local dev version," I am unsure if you are using Datomic on-prem with a dev protocol DB.#2022-02-0218:48markgdawsonYeah, I've got backups and restoring working as you describe. But what I'd really like to be able to do is to roll back (i.e. lose some of the most recent datoms). We do have features that use history. So retracting wouldn't be strictly equivalent, as you suggest. Shame there isn't a way to roll back to arbitrary points in time without already having taken a backup at that point.#2022-02-0219:33jaretMay I ask how large your DB is in total Datoms?#2022-02-0220:56markgdawsonInteresting. I could re-transact all the datoms based on d/log. Is that what you're thinking? That might work...#2022-02-0220:58steveb8nThis works for me https://github.com/vvvvalvalval/datomock#2022-02-0223:08jaretMark, that is what I was thinking. If it's small enough transacting from the log pointed at a new DB would be useful for wanting to create a test DB to a certain T etc. But perhaps Datomock is a better fit.#2022-02-0223:09jaretI'd also point out that in Cloud (I know you're in on-prem), dev-local is a great tool for this sort of thing as you can import a cloud DB with a filter spec on t values. https://docs.datomic.com/cloud/dev-local.html#import-cloud#2022-02-0307:21markgdawsonThat looks really nice @U0510KXTU! This is a very neat concept. I really like this and will be sure to use it. It's a shame that it won't accept a database value returned by as-of (https://docs.datomic.com/client-api/datomic.client.api.async.html#var-as-of), because that would be incredibly handy. A quick test seems to suggest that https://pastebin.com/Jt61A8iP. So it still doesn't allow you to do something equivalent to appearing to roll back a database connection.
Still, this is a very nice find. Thanks.#2022-02-0307:23steveb8nI use it for generative testing. seed a db and then reset back to that seed for each invocation of the fn. speeds up the tests a lot!#2022-02-0307:25markgdawsonYeah, I bet. I can see a lot of use cases for it. Very nice library.#2022-02-0311:55markgdawsonIn case anyone is interested, the docs say as-of and with aren't compatible:
https://docs.datomic.com/on-prem/time/filters.html#as-of-not-branch#2022-02-0222:03rgorrepatiHi, We are running on-prem, on aws dynamodb. We only have temporary credentials in AWS, and thus have to use AWS_SESSION_TOKEN. Setting it via environment variable and connecting to datomic works, but setting it via connection string like "datomic:" doesn't seem to work. Is that expected?#2022-02-0309:18BenjaminJo I have a tx (eid) and now I'd like to say as-of 1 before that. Use case is I know a "bad" tx and want to check the db right before I made it.#2022-02-0311:41souenzzoYou can use dec on it.
https://gist.github.com/souenzzo/eb3753302d5af047346ac1c510500e69#2022-02-0313:36Benjaminsweet{:tag :div, :attrs {:class "message-reaction", :title "bananadance"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ({:tag :img, :attrs {:alt "bananadance", :src "https://emoji.slack-edge.com/T03RZGPFR/bananadance/5394a2df1be70a15.gif"}, :content nil})} " 1")}
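[editor's note] The dec-the-tx trick can be simulated in plain Clojure to show what it means — no Datomic here; the db is modelled as a log of `[e a v tx added?]` tuples and the attributes are made up. "As-of right before a bad tx" is just replaying everything with tx < bad-tx:

```clojure
(defn as-of-before
  "View of the log at the instant before bad-tx: replay only datoms with tx < bad-tx."
  [log bad-tx]
  (reduce (fn [db [e a v _ added?]]
            (if added? (assoc-in db [e a] v) (update db e dissoc a)))
          {}
          (filter (fn [[_ _ _ tx _]] (< tx bad-tx)) log)))

(def log [[1 :user/name  "Ada"              100 true]
          [1 :user/email "ada@example.com"  101 true]
          [1 :user/email "oops@example.com" 102 true]]) ; tx 102 is the "bad" one

(as-of-before log 102)
;; => {1 {:user/name "Ada", :user/email "ada@example.com"}}
```

With real Datomic the equivalent is the `(d/as-of db (dec tx))` approach from souenzzo's gist, since as-of includes everything up to and including the given t/tx.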
#2022-02-0313:39BenjaminWith the client api. Let's say I like pull syntax and I want to pull a lot of entities, do I still use the q ? Because I'm naively querying eids and then pulling them one by one. It's basically just for the convenience of building a map out of the data#2022-02-0313:44favilaCould you be more specific? Maybe show some code that’s “I do this” vs “should I do this instead”?#2022-02-0314:04Benjamin;; I do roughly this
(->> (d/q '[:find ?e
            :where
            [?e :bot.discord/thread-id ?thread-id]]
          db)
     (map first)
     (map (fn [e]
            (d/pull db
                    '[:bot.discord/user-id
                      :bot.discord/thread-id
                      {:bot/thread-user [:bot.discord/user-id]}]
                    e))))
;; and I wonder if I should do this
(->> (d/q '[:find ?id ?user-id
            :where
            [?e :bot.discord/thread-id ?id]
            [?e :bot/thread-user ?user]
            [?user :bot.discord/user-id ?user-id]
            ;; ... more
            ]
          (get-db))
     (map (fn [[id user-id]]
            {:bot.discord/thread-id id
             :bot.discord/user
{:bot.discord/user-id user-id}})))#2022-02-0314:05BenjaminThe inconvenient part is building the map in the second part of the second example#2022-02-0314:08favilathese are slightly different. The second one will never produce a nil/absent user-id, the first will#2022-02-0314:09Benjaminyea actually I realized that is also a constraint for my use case. I do want the nil#2022-02-0314:09Benjaminaside from that I found the :keys second arg to q#2022-02-0314:10favilayou should probably be doing something like this#2022-02-0314:10favila(->> (d/q ; or qseq
'[:find (pull ?e pull-expr)
:in $ pull-expr
:where [?e :bot.discord/thread-id ?thread-id]]
db
'[:bot.discord/user-id
:bot.discord/thread-id
{:bot/thread-user
[:bot.discord/user-id]}])
(map peek)){:tag :div, :attrs {:class "message-reaction", :title "eyes"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👀")} " 1")}
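[editor's note] For readers puzzled by the trailing `(map peek)`: each :find tuple from a `(pull ?e expr)` query is a one-element vector, and `peek` on a vector returns its last element, so this unwraps the pulled maps. Pure Clojure, with the result shape shown on made-up data:

```clojure
(def query-result
  [[{:bot.discord/thread-id 1 :bot.discord/user-id 10}]
   [{:bot.discord/thread-id 2 :bot.discord/user-id 20}]])

(map peek query-result)
;; => ({:bot.discord/thread-id 1, :bot.discord/user-id 10}
;;     {:bot.discord/thread-id 2, :bot.discord/user-id 20})
```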
#2022-02-0314:10favilaevaluate the pull in the query, possibly parameterize the pull#2022-02-0314:12Benjaminah sweet I didn't know yet I can use pull in find#2022-02-0314:13Benjaminqseq returns a lazy seq right that's the difference?#2022-02-0314:13favilausing pull in query is more important on client api than on-prem. On the client api, each pull is another (possible) network round-trip#2022-02-0314:13favilaqseq evaluates the result set eagerly, but does the pull only as it’s consumed#2022-02-0314:14favilait reduces latency to time-to-first-result, and can avoid doing IO for results you don’t consume#2022-02-0314:14Benjamincool thanks#2022-02-0314:24Benjaminyour instinct to parameterize the pull-expr was really nice because I could plug my old code into it, beautiful#2022-02-0320:23jdkealyWhat happens if i configure a transactor using a host and alt-host and then i want to move the host / alt host ?
For example, i start the transactor, it writes its location to storage, then I use storage and transactor in prod, then someone for whatever reason wants to switch the transactor host.
If I start a new transactor with a new host / alt host, will it overwrite its location in storage? If not, how do i get around this ? Would you go into postgres/dynamo/whatever and update the map with the new transactor host ?
One such reason might be, you used the public hostname for an EC2 instance and then you needed to restart and you got assigned a new address, or you were using a CNAME and forgot to renew your registration in godaddy and lost the host name.#2022-02-0320:44jaretJdkealy, host and alt-host are written by the transactor to storage. Peers attempt to connect to host and then alt-host. You should be able to launch a new transactor serving the same system with new host/alt-host values (that transactor will be in standby mode) and then kill your old transactor and HA failover will make the standby the active transactor and it will write it's value to storage to allow peers to locate it. Peers will then connect provided they have the proper permissions to connect there. https://docs.datomic.com/on-prem/operation/deployment.html#upgrading-live-system#2022-02-0320:45jaretIn this scenario your sql-url will not change so you are still running a transactor against that storage, but the transactor will have new host/alt host values#2022-02-0320:46jarethttps://docs.datomic.com/on-prem/operation/deployment.html#peer-fails-to-connect#2022-02-0400:16jdkealy🙏#2022-02-0411:28neilprosserI had a search in this channel but couldn't find anything which looked like it matched. Does anyone know the answer to this question: https://ask.datomic.com/index.php/651/authenticate-authorize-access-gateway-endpoints-datomic? We're successfully using IAM Authorizers against our Ion Gateway APIs in Cloud but I've hit the same problem trying to do the same thing for a Client API Gateway.#2022-02-0415:13jarrodctaylorThe expectation is no additional auth efforts are required for the client api
https://docs.datomic.com/cloud/operation/access-control.html#how-datomic-access-control-works#2022-02-0415:39neilprosserSo when the Client and Ion API Gateways have just been created they're open to the world on the execute-api URL. I've added an IAM authorizer to the API Gateway and then had to start signing my requests which we're making via an HTTP client from Google Cloud. I tried the same thing on a Client Gateway and I get a 403 while the client is attempting to retrieve the S3 auth details (judging by the stack trace). Should I not have to do that?#2022-02-0415:44neilprosserHave I just made things more difficult for myself by adding the IAM authorizer to the Client API Gateway and everything is secure without it?#2022-02-0418:34jarrodctaylorCould you provide more details about what you are concerned about being open to the world?#2022-02-0419:19neilprosserI just switched ClientApi to yes in a query group. If I curl ClientApiGatewayEndpoint (taken from the CloudFormation outputs) straight after creation of the Client API Gateway I can see {:s3-auth-path "system-blahblahblah"}. If I switch on an IAM Authorizer I see {"message":"Forbidden"} via curl which makes sense since I'm not signing the request. However at this point using the Client API from the library I go from being able to get a client locally using my AWS creds in environment variables to getting a 403 (stacktrace points to datomic.client.impl.cloud$get_s3_auth_path.invokeStatic (cloud.clj:179)). My concern was that before the IAM Authorizer was switched on that root path on the Client API Gateway is open. I just wanted to confirm that we don't need the IAM Authorizer and it's fine because that first resource is publicly available but subsequent requests are using my credentials.#2022-02-0419:23neilprosserSince I have no idea about the other paths the client is querying I took it that that root path being open meant everything else was open. 
We've been using Ions via API Gateway and after upgrading to 884 had to add IAM Authorizers to those to prevent them being unauthenticated which looks like it was the default state.#2022-02-0502:41jarrodctaylorAs the docs say All Client API requests to Datomic Cloud use SSL, and authenticate via AWS HMAC-SHA256 signatures. so no additional auth is required for the client.
The ion gateway is the part that is up to you. No decisions have been made for you there and you can configure authentication (or not) for what you build and deploy there as needed. I wrote a http://www.jarrodctaylor.com/posts/Cognito-Authentication-For-Datomic-Cloud/ post covering some ways of accomplishing that#2022-02-0509:12neilprosserThanks for confirming. Maybe makes sense to post something official as an answer to that question I linked at the top if two people have independently gone down the same unnecessary path.{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-02-0713:09Dimitar UzunovHello I get an error in the transactor and a peer on the same server fails to connect. What could this error mean ?
2022-02-07 10:33:27.910 ERROR default o.a.activemq.artemis.core.server - AMQ224095: Error updating Consumer Count: Tried to decrement consumer count below 0: QueueInfo [routingName=admin.response6200f577-f2bb-4e0c-a6df-cc07a5c2c372, clusterName=admin.response6200f577-f2bb-4e0c-a6df-cc07a5c2c3727b3bd979-8800-11ec-b5ca-065204aa7e2f, address=admin.response, filterString=null, id=174, filterStrings=null, numberOfConsumers=0, distance=0]#2022-02-0715:29Dimitar Uzunovthe error went away after I upgraded to the latest datomic#2022-02-0715:29Dimitar Uzunovwhich has a newer version of activemq#2022-02-0719:09Dimitar UzunovAny plans to support Java 17? The website says 8 is required but there are other docs that say 11 is supported. Peers couldn’t connect to version 17 in our environment#2022-02-0719:47favilaI’d like to know the answer to this too. Peers seemed to run on 17 fine, and the transactor started up with 17 without error and seemed to be running, but no peer could connect to the transactor.#2022-02-0719:47favilaWe are running transactors and peers on Java 11 currently#2022-02-0719:48ghadiwas there an error thrown?#2022-02-0719:49favilaonly from the peers saying they could not connect#2022-02-0720:05jaret@U09R86PA4, we are evaluating and looking at JDK 17/Java 17 for the next maintenance release. Usual caution that I don't have a timeline, but it's on our near term todo list. (CC @ULE3UT8Q5){:tag :div, :attrs {:class "message-reaction", :title "thanks3"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ({:tag :img, :attrs {:alt "thanks3", :src "https://emoji.slack-edge.com/T03RZGPFR/thanks3/868be8344387d7f0.gif"}, :content nil})} " 3")}
#2022-02-0720:38JohnJartemis seems to be the culprit, latest version supports java 17{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 3")}
#2022-02-0811:59kipzHi all. I'm having a problem querying through time! Any help much appreciated.
Say I want to pass two databases to some query. One is the current database $ and one is before $before. This seems to work for some simple clauses:
[$before ?version :vulnerability.advisory.version/vulnerable-range ?range]
but using or-join for example:
(or-join [?version ?range]
[$before ?version :vulnerability.advisory.version/vulnerable-range ?range])
we get an error:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"processing clause: [$before ?version :vulnerability.advisory.version/vulnerable-range ?range], message: Cannot resolve key: $before"}
I've tried passing the database in arguments to the or-join, but the error is similar:
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"processing clause: [$before ?version ?range], message: Cannot resolve key: $before"}
Am I doing something wrong here? Is there another way to pass databases around?#2022-02-0812:48Linus EricssonAccording to the documentation the syntax for or-join is: or-join-clause = [ src-var? 'or-join' rule-vars (clause | and-clause)+ ]
so maybe try
($before or-join [?version ?range]
[?version :vulnerability.advisory.version/vulnerable-range ?range])
#2022-02-0812:48Lennart Buitif you name your rule, you can pass it before the rule name:
($my-database rule ?arg ?arg2 ?arg2)
#2022-02-0812:49Lennart Buitbut you can never pass more than one database value to a rule#2022-02-0812:49Linus Ericsson(the syntax is available here: https://docs.datomic.com/on-prem/query/query.html#query )#2022-02-0813:29kipz@UDF11HLKC thanks, but I'm not using a rule here (at least not one I can control)#2022-02-0813:29Lennart BuitYou are, or-join is just an anonymous rule; so the same limitation counts for the syntax Linus shared#2022-02-0813:30kipzThanks @UQY3M3F6D - I found similar docs for cloud here: https://docs.datomic.com/cloud/query/query-data-reference.html#or-join#2022-02-0813:31kipzThe problem is that I need a different database for different clauses of the or-join - hence my attempt there. Sounds like I need to create my own function to do this.#2022-02-0813:32Linus EricssonYes, I think that will need a separate function or separate queries.#2022-02-0813:33kipzThanks again both.#2022-02-0814:07kipzFound a reference to the same issue here: https://forum.datomic.com/t/database-cant-be-passed-explicitly-for-or-or-join-clauses/1373/5#2022-02-0917:50Daniel JompheHi, using Datomic dev-local:
(def db-hist (d/history (d/db conn)))
results in:
; Execution error (IllegalArgumentException) at datomic.client.api.protocols/fn$G (protocols.clj:126).
; No implementation of method: :history of protocol: #'datomic.client.api.protocols/Db found for class: datomic.dev_local.impl.DurableConnection
Could it be that dev-local does not support history queries?#2022-02-0918:05jarrodctaylorThat functionality is supported in dev-local. Can you provide a full reproduction of steps to get to your current error state?#2022-02-0918:06Daniel JompheSorry, we're starting to realise (soon confirmed) that we were passing in a conn instead of a db (my code example above is not our real code call site).{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-02-0918:18Daniel JompheConfirmed. Our error was calling it thusly e.g. (d/history conn) instead of (d/history db)#2022-02-0918:38ghadiUse a transaction function {:tag :div, :attrs {:class "message-reaction", :title "heavy_plus_sign"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("➕")} " 1")}
#2022-02-1000:10favilaDoes the datomic-pro distribution really need all the aws-java-sdk- service libs in its lib/ directory? That’s over 200mb of jars, and the distribution has been getting fatter and fatter with each release (now 414MB!)#2022-02-1012:09Adam Lewispro peer pulls in about 28MB worth of jars, and only ec2 and core from AWS, so I'm guessing no (granted, the storage APIs are all provided dependencies, so I should add DynamoDB).
I know there is more stuff in the full distro (including a complete PrestoSQL / Trino distribution -- the biggest size jump when it was added), but I can't imagine the AWS Chime SDK is needed#2022-02-1012:10karol.adamiecyou would think that cognitect aws api could replace this, but it does not support AWS_CA_BUNDLE 🙃
and this is a widespread enough infra pattern that it would harm Datomic i think... Needs to get patched, but not much going on other than a hanging ticket:
https://github.com/cognitect-labs/aws-api/issues/127#2022-02-1015:38JohnJI guess everything is being pulled because it's more convenient and because various of those are used for cloud#2022-02-1016:18JohnJI find it more annoying that the peer has a hard dependency on memcache-asg-java-client though, which pulls a very old aws-java-sdk-ec2#2022-02-1016:18JohnJthere's also a hard dependency on h2#2022-02-1008:32cl_janybody knows why (d/pull db [:node/some-attr :db/id] eid) pulls the expected attribute values from the entity, but (d/pull db [:db/id] eid) pulls everything from the entity?#2022-02-1015:01kennyI've encountered this too. It seems like a bug. We do have an open support ticket on this. Maybe we should create an ask.datomic so others can vote on this too. #2022-02-1016:48kenny@U53B6QVDX fyi, I've opened an ask.datomic for exposure: https://ask.datomic.com/index.php/703/pulling-db-id-will-pull-the-entire-entity
#2022-02-1017:03favilaanother :db/id-related oddity is that it can’t be renamed (attr-spec is ignored)#2022-02-1017:06kennyYep. We have an open support ticket on that one too 😅#2022-02-1017:00kennyCan we alter the JVM launch params for Datomic Cloud to include -XX:-OmitStackTraceInFastThrow? We're getting a "java.lang.NullPointerException with empty message" back from a Datomic transaction issuing a call to a custom transaction function. There's a bug in that function, and a line number would be very helpful to debug.#2022-02-1019:40kennyLooks like you could be naughty and modify the CF template at Parameters.Datomic.defaults.JvmFlags#2022-02-1019:43kennyAlternatively, it seems like you could ignore "Leave blank unless directed by Datomic Support." and modify OverrideSettings to be set to "export JVM_FLAGS="${JVM_FLAGS} -XX:-OmitStackTraceInFastThrow"#2022-02-1020:13Joe LaneWhy don't you just use dev-local or cast within the tx-fn to investigate this problem @U083D6HK9?#2022-02-1020:22kennyThis DB is 4b datoms and would take weeks to import with dev-local.
Casting would work. I'd prefer having NPE stacktraces upfront though. We don't use NPEs for control flow so disabling that flag will have little to no perf impact. #2022-02-1020:25kennyMaybe Datomic does though and this could have adverse effect on Datomic then?#2022-02-1020:25Joe LaneThat's a big presumption about perf and you modifying things like that puts your system squarely in Undefined Behavior territory.#2022-02-1021:49kenny:man-shrugging::skin-tone-3: Test & measure#2022-02-1021:54kennyWe already run in prod with that flag, so it'd mostly be a matter of Datomic perf impact. Seems straightforward to measure from the CW dashboard. #2022-02-1022:23kennyOut of curiosity, have you encountered a situation where setting that flag had an adverse impact on perf? If so, what was the area? I've seen some compilers use exceptions for control flow and would imagine this flag causing poor perf for them. #2022-02-1118:10jaretWe haven't encountered a situation where setting the flag had an impact on performance, but we also rarely ever have anyone set flags. This is to my knowledge in support.#2022-02-1118:57kenny👌:skin-tone-2:#2022-02-1617:46neilprosser@U083D6HK9 Hopefully you don't mind me digging this thread up but did you have any luck getting the stack trace to appear. I've tried the JVM_FLAGS suggested above and have transactions which are failing with a NullPointerException before hitting any of my transaction functions.#2022-02-1019:47Benjamin(d/with
(get-db)
{:tx-data
[{:bot.discord/user-id "foouser"}]})
Hi, what do I do wrong? Is arg-map not the same as for transact?#2022-02-1019:47kennylgtm. What's the problem?#2022-02-1019:48Benjamin1. Unhandled java.lang.ClassCastException
class datomic.core.db.Datum cannot be cast to class
java.lang.Number (datomic.core.db.Datum is in unnamed module of
loader 'app'; java.lang.Number is in module java.base of loader
'bootstrap')
tx.clj: 397 datomic.dev-local.tx/datom-lookup-valfn
tx.clj: 397 datomic.dev-local.tx/datom-lookup-valfn
local_log.clj: 56 datomic.dev-local.local-log.LocalLog/valAt
RT.java: 760 clojure.lang.RT/get
btindex.clj: 281 datomic.dev-local.btindex.BTIndex/cons
RT.java: 677 clojure.lang.RT/conj
core.clj: 87 clojure.core/conj
core.clj: 84 clojure.core/conj
db.clj: 2322 datomic.core.db.Db/addData
db.clj: 3353 datomic.core.db/add-ensured-data
db.clj: 3351 datomic.core.db/add-ensured-data
db.clj: 3370 datomic.core.db/with-tx
db.clj: 3357 datomic.core.db/with-tx
db.clj: 2164 datomic.core.db.Db/with
local_db.clj: 67 datomic.core.local-db/fn
local_db.clj: 24 datomic.core.local-db/fn
protocols.clj: 126 datomic.client.api.protocols/fn/G
api.clj: 363 datomic.client.api/with
api.clj: 353 datomic.client.api/with
REPL: 116 user/eval22254
REPL: 116 user/eval22254
Compiler.java: 7181 clojure.lang.Compiler/eval
Compiler.java: 7136 clojure.lang.Compiler/eval
core.clj: 3202 clojure.core/eval
core.clj: 3198 clojure.core/eval
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn/fn
AFn.java: 152 clojure.lang.AFn/applyToHelper
AFn.java: 144 clojure.lang.AFn/applyTo
core.clj: 667 clojure.core/apply
core.clj: 1977 clojure.core/with-bindings*
core.clj: 1977 clojure.core/with-bindings*
RestFn.java: 425 clojure.lang.RestFn/invoke
interruptible_eval.clj: 87 nrepl.middleware.interruptible-eval/evaluate/fn
main.clj: 437 clojure.main/repl/read-eval-print/fn
main.clj: 437 clojure.main/repl/read-eval-print
main.clj: 458 clojure.main/repl/fn
main.clj: 458 clojure.main/repl
main.clj: 368 clojure.main/repl
RestFn.java: 1523 clojure.lang.RestFn/invoke
interruptible_eval.clj: 84 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 56 nrepl.middleware.interruptible-eval/evaluate
interruptible_eval.clj: 152 nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
AFn.java: 22 clojure.lang.AFn/run
session.clj: 218 nrepl.middleware.session/session-exec/main-loop/fn
session.clj: 217 nrepl.middleware.session/session-exec/main-loop
AFn.java: 22 clojure.lang.AFn/run
Thread.java: 833 java.lang.Thread/run#2022-02-1019:48Benjaminit's dev-local. With divert-system#2022-02-1019:49kennyDoes (get-db) return a db you got from calling d/with-db on a conn?#2022-02-1019:51Benjaminno from d/db#2022-02-1019:51kenny> Applies tx-data to a database returned from 'with-db' or a
> prior call to 'with'.
>
> https://docs.datomic.com/client-api/datomic.client.api.html#var-with#2022-02-1019:51kennyYou've got to pass a with-db#2022-02-1019:51Benjaminah#2022-02-1019:51Benjaminthanks
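To summarise the fix as a hedged sketch (assuming conn is a client-api connection; the attribute name is taken from the question above):

```clojure
(require '[datomic.client.api :as d])

;; d/with needs a "with-db", not a plain (d/db conn) value.
(let [with-db (d/with-db conn)                        ; speculative base db
      {db-after :db-after}
      (d/with with-db {:tx-data [{:bot.discord/user-id "foouser"}]})]
  ;; db-after is a speculative database; the real db is untouched.
  ;; It can be passed to further d/with calls to chain speculation.
  db-after)
```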
#2022-02-1116:15kennyIf I receive an anomaly back from the client api of category fault and a :datomic.client-spi/exception key attached to the anomaly map, should I expect to find an exception in the CW logs? e.g., for the below anomaly map, should I expect to find a CW log line with a NullPointerException stacktrace?
{:datomic.client-spi/context-id "dee3c6db-b037-4056-a3b2-059ad6e0a7a6",
:cognitect.anomalies/category :cognitect.anomalies/fault,
:datomic.client-spi/exception java.lang.NullPointerException,
:datomic.client-spi/root-exception java.lang.NullPointerException,
:cognitect.anomalies/message "java.lang.NullPointerException with empty message",
:dbs [{:database-id "07b79939-5cf0-4074-808c-79b735fd2660", :t 134265434, :next-t 134265435, :history false}]}#2022-02-1117:58BenjaminAt what rate of writes is datomic not really suitable anymore? Few per second?#2022-02-1118:03Joe Lane@benjamin.schwerdtner much, much more than that.#2022-02-1118:03Benjaminok cool#2022-02-1118:06ghadiIt's not bitcoin
#2022-02-1118:08Adam LewisThere are so many specifics that matter in terms of performance, but for a single point of reference, when we do bulk load jobs sourced from "enterprise line-of-business RDBMS" we see about 2000 transactions per second (txn datom counts are all over the place, we pack one relational row per datomic transaction)#2022-02-1118:09Adam Lewisthis is with transactor running on something like an m5.xlarge and storage in DynamoDB (on-demand)#2022-02-1118:11JohnJis on-demand mode instant?#2022-02-1118:13Adam Lewissort of. ddb has to re-shard to scale up, I believe on-demand mode can instantly handle write volumes twice as high as it has previously seen on that table#2022-02-1118:15Adam LewisThe DDB docs cover it in more detail: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand#2022-02-1119:07JohnJlooks like indexing can cause trouble there, is it normal for indexing to double your writes? or maybe 3x-10x times?#2022-02-1120:18Adam LewisI'm looking at some metrics here, it looks like index jobs correlate to a 3x increase in write capacity unit consumption...that factor (3) is suspiciously identical to our transactor's write concurrency, so a beefy high-concurrency transactor instance might produce different results. But from a DDB on-demand standpoint it doesn't matter, since its instantaneous capacity is 2x the max ever seen. I guess it means should wait 30 minutes between write concurrency doublings in a transactor scale-up scenario.#2022-02-1120:46JohnJthx, OTOH it looks like datomic would have a very hard time trying to wear out a postgres table (on a vm with a fast SDD), likely would require dozens of peers and high writes#2022-02-1121:10Adam LewisYes, and may also be more cost effective for predictable intense write volumes (while making durability your responsibility). The transactor process itself is almost always the limiting factor. 
Anecdotally the cognitect folks have mentioned to me that they have not found an upper limit of what DDB can handle in terms of read/write throughput.#2022-02-1122:16JohnJgood point about cost, ddb can become very expensive. About durability, even with ddb wouldn't it be safer to also have datomic backups?#2022-02-1122:17Joe LaneJust a heads up, DDB on-demand doesn't instantly scale, not for reads nor writes.
#2022-02-1122:18Joe LaneYou will still get throttling exceptions.#2022-02-1118:10Adam LewisI should say, that is "average" performance over many hour long load jobs. actual performance starts out much higher and then goes down as indexing jobs start to dominate#2022-02-1118:31jacob.maineI’m having unexpected dependency conflicts after a recent upgrade to 939-9127. I wrote up the problem https://ask.datomic.com/index.php/702/mismatch-between-expected-dependencies-dependency-conflicts. Anyone noticed similar problems?#2022-02-1120:10jaretHi @U07M2C8TT I updated with an answer on the ask. We are aware of this problem. Suffice it to say that we understand this issue and the deps-conflict reported on an ion-push is not accurate as the cloud-deps.edn in ion-dev does not match the cloud-deps.edn that is actually running in your version of Datomic Cloud. You should be on the correct and expected dep you saw in your spath. Please let me know if you see otherwise. Hope you are well!#2022-02-1121:49jacob.maineHey @U1QJACBUM! Thanks for the update. I can confirm—within a running Ion my classpath contains the more recent versions of these deps. I’ll ignore the dependency conflicts for now. Hope you’re well too!#2022-02-1121:59steveb8nQ: I’m thinking about using an attribute of type “ref” and cardinality “many”. doesn’t need a sort order. somehow it feels wrong to have a many foreign key. maybe this is my RDBMS habits echoing. What’s good/bad about this? I just want to sanity check my thinking#2022-02-1122:36favilaIn general you should strive to keep cardinality low. So if it’s cardinality-one in the opposite direction in your domain model, I’d say prefer that unless you want isComponent semantics because datomic will keep the invariant for you. But card-many refs in themselves are common and not alarming.#2022-02-1123:21steveb8nthanks. this one will be low cardinality, generally less than 5. you make a good point about using a component attribute instead. 
I’ll think on that#2022-02-1122:09JohnJsounds pretty standard for datomic#2022-02-1409:01Benjaminion: what is a good way to check the latest succeeded deploy rev? My app could print it as "version"#2022-02-1414:46Joe LaneQuery code deploy#2022-02-1501:16hdenWhat if we are in the middle of a rolling deployment (i.e. multiple versions are running)? How do we know which version did the request hit?#2022-02-1513:05Adam LewisI'm using on-prem, not cloud, but do use code deploy...so this may not perfectly align with your environment, but what we do is populate the https://docs.oracle.com/javase/tutorial/deployment/jar/packageman.html in MANIFEST.MF as part of our build and then read that back in the running process (e.g. logged on startup, added to response headers, etc.). Our codedeploy descriptor includes a "validate" script which checks that the running version matches the one it (thinks) it just deployed. This helps in cases where, e.g., an old version doesn't cleanly exit and codedeploy scripts think they have deployed a new version, but in reality the old version is still serving requests.#2022-02-1605:48steveb8nI have a step in my CI that stores the git sha and tags into an entity in Datomic. used for deploys of more than just Ions. really useful to have that data 1 query away. Didn’t think about querying code-deploy, that’s a good idea too#2022-02-1616:46Benjamingotta check that thanks#2022-02-1501:58shieldsFinally working on the big upgrade for Datomic Cloud and when going to version 939-9127 I'm getting a 502 Bad Gateway response when calling the newly created IonApiGatewayEndpoint .
Any suggestions?#2022-02-1502:21jarrodctaylorCan you provide more detail on what has been done with regard to upgrading? How are you calling the ion endpoint and have you deployed an app that you expect to be able to hit?#2022-02-1502:41shields• Upgrade Process going from 704-8957 -> 939-9127
◦ Upgraded Storage successfully
◦ Failed to upgrade compute nodes in CF(rollback)
◦ Deleted Compute nodes
◦ Created new nodes successfully with 939-9127 template
◦ Updated deps.edn w/ https://docs.datomic.com/cloud/releases.html#current
◦ Able to get data locally with the new ClientApiGatewayEndpoint
◦ Removed (apigw/ionize app) function
◦ App is using Reitit similar to https://github.com/JarrodCTaylor/ion-cognito-exemplar/blob/main/src/ion_cognito_exemplar/core.clj#L36
◦ Edited ion-config.edn similar to https://github.com/JarrodCTaylor/ion-cognito-exemplar/blob/main/resources/datomic/ion-config.edn
◦ Push/deploy successfully
◦ Grabbed the IonApiGatewayEndpoint and it returns a 502 Bad Gateway response when calling from the browser, with curl, and with Postman.#2022-02-1502:43shieldsWondering if there is more configuration needed in API-Gateway. I see, in the terraform scripts the routes were configured someway.
https://github.com/JarrodCTaylor/ion-cognito-exemplar/blob/main/scripts/main.tf#L88#2022-02-1520:39shieldsUsed your example app @U0508JRJC without the auth endpoints and I'm still getting the error after a successful deploy.
Found this https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html#http-502-issues
But not sure how to debug.#2022-02-1520:59jarrodctaylorThe terraform script is only modifying the api-gateway routes to accommodate the public/private routing in the example this isn’t strictly required for any application as a default. The default route created by the cloud formation script will proxy all requests to your app and in your case the reitit router. I would suspect the application as the source of the problem before turning to the LB. A newly created stack will return a 502 if there isn’t an app deployed yet. Can you run the application locally? Are you trying to hit the root ion endpoint or a deeper path in your router?#2022-02-1521:15shieldsThe app runs locally. I've tried both the base endpoint and the deeper paths.
Changes to the app after upgrade:
• Removed (apigw/ionize app) function
• Removed the :integration :api-gateway/proxy lambda from the ion-config.edn
• Edited ion-config.edn to
{:allow [blvd.blvd-app-api.router/app]
:http-direct {:handler-fn blvd.blvd-app-api.router/app}
:app-name "our-app-name"}
#2022-02-1521:18shields;;; Public routes
(defn ping-response [_]
{:status 200
:body {:message "Pong'ing back"}})
(def public-routes
["/public"
["/ping" {:name ::ping
:get {:handler ping-response}}]])
(def app
(ring/ring-handler
(ring/router
["/api/v1" [public-routes]]
{:data {:muuntaja m/instance
:middleware [mw/options-mw
mw/cors-mw
parameters/parameters-middleware
muuntaja/format-middleware]}})
(ring/create-default-handler)))
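For comparison, a hedged sketch of the smallest possible :http-direct handler, useful to rule out the router when debugging a 502 (the namespace and app-name are illustrative, not the poster's actual app):

```clojure
(ns my.ion
  "Minimal http-direct ion handler sketch; names are illustrative.")

;; An :http-direct handler-fn is a plain function of a request map
;; (:body is an InputStream) returning a ring-style response map.
(defn ping-handler [_request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    "pong"})

;; resources/datomic/ion-config.edn would then reference it:
;; {:allow       [my.ion/ping-handler]
;;  :http-direct {:handler-fn my.ion/ping-handler}
;;  :app-name    "my-app"}
```

If this bare handler still returns a 502 after a successful push/deploy, the problem is in the deployment rather than the application code.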
#2022-02-1522:09jarrodctaylorThat all appears correct, but I assume there is more to the application? If you want to / can share the actual application code I can poke it further via email or JarrodCTaylor on github. If that is not an option you can enable logging in the api gateway which I don’t expect will report much other than that there was a 502 response issued, but it is something I always do when debugging.
One last question here, you said we'd get 502 if the app hasn't been deployed. Is there a way to check in the console if it was deployed other than the successful response and in the Step Functions? In the last version the lambda would be created for the proxy, anything similar for http-direct?#2022-02-1617:06shields@U0508JRJC Any answer to that last question. I was able to clone your repo and deploy it to our system and hit the endpoint successfully.
But when I deploy your app from our #polylith repo it doesn't connect. We've changed our deps to match yours. I've had no issue deploying from Polylith before with the ionize functions and they say in their https://clojurians.slack.com/archives/C013B7MQHJQ/p1645027134110639 they've been able to deploy on the latest version. Lambdas are also built but the Ions endpoint isn't callable.
Just wondering if the app is being deployed or not and wondering where I can check.#2022-02-1619:00shields@U0508JRJC Sent an email.#2022-02-1702:20jarrodctaylorI will get a reply back to you tomorrow.
#2022-02-1604:22steveb8nQ: I just upgrade my cloud instances to latest. trying to find a good way to monitor server cpu load. what metric do people use for this?#2022-02-1604:25steveb8nfound it. can use EC2 > CPUUtilization i.e. not a DatomicCloud metric#2022-02-1604:25Joe LaneCPUUtilization should be on the Datomic Cloud Dashboard in CloudWatch#2022-02-1604:35steveb8nthanks. I should have looked there first#2022-02-1608:47Mutasem HidmiHi everyone. I am new to Datomic. I wanted to ask if it's possible to use Datomic to do a relational database implementation. Thank you.#2022-02-1710:19kipzI'm not sure exactly what you're asking. You can model relations and do relational queries with datomic, so in that regard, the answer is yes. But I'm not sure that's what you're getting at?#2022-02-1712:33kipz^^#2022-02-1713:33Andrej GolcovHi,
We have parent-child relations in datomic, something like:
{
:rootId "1"
:child/ids [{:childType "a" :childId "2"} {:childType "b" :childId "3"}]
}
If I want to pull parent data with related children entities, the pull expression looks like this:
[:rootId {:child/ids [:childType :childId]}]
Q: Is there is any way to make pull to retrieve specific child entities e.g. only with type="a"?
Something similar to json-path: $child/ids[?(@.type="a")].childId
thanks.#2022-02-1714:57kipzI don't think there is. I'd normally change the query to return multiple entities and pull specific attributes off each one.#2022-02-1716:20thumbnailI've looked into this in the past as well and couldn't find a way.#2022-02-1717:45Andrej Golcovthanks,
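Following kipz's suggestion, one hedged sketch of doing the filtering in :where and the shaping in pull (attribute names taken from the question; the result shape differs from a single nested pull):

```clojure
(require '[datomic.client.api :as d])

;; Filter children by :childType in the query, then pull only what's needed.
(d/q '[:find ?rootId (pull ?child [:childType :childId])
       :in $ ?type
       :where
       [?parent :rootId ?rootId]
       [?parent :child/ids ?child]
       [?child :childType ?type]]
     db "a")
;; Each row pairs the parent's :rootId with one matching child map,
;; so the caller reassembles the nesting the original pull expression had.
```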
#2022-02-1805:09jasonjcknHey,
I’m trying to dockerize datomic pro and my setup works just fine merely with a dockerized transactor, i’ll expose ports 4334-4336, and if I start-up the console not inside a container it works great, e.g
bin/console -p 8080 x "datomic:"
However a problem occurs if launch the console from within a container via docker-compose (also datomic peer, exact same error), for some reason it won’t connect to the transactor over the docker compose network
I get the following error (datomic console)
ActiveMQNotConnectedException.... Cannot connect to server(s). Tried with all available servers....
Same exception with dockerized Peer
transactor.properties
protocol=dev
host=0.0.0.0
port=4334
storage-access=remote
storage-admin-password=changeme
storage-datomic-password=changeme
license-key=XYZ
here’s the command I use in docker compose to launch the console
command: "-p 8080 xyz datomic:"
JDKv8, Datomic Pro 1.0.6344#2022-02-1818:04jasonjcknnevermind, solved it.#2022-02-1806:44onetomhow can i write a datalog query which picks between 2 where clauses, based on the presence of a query argument?
eg:
(d/q '[:find ?e :in $ ?a ?v :where [?e ?a ?v]] db-val :attr "val")
but when ?v is nil, i want all entities, which has :attr:
(d/q '[:find ?e :in $ ?a ?v :where [?e ?a _]] db-val :attr nil)
can i express this in a single query?
i've tried this:
(d/qseq
{:query '{:find [?e]
:in [$ ?a ?v]
:where [(or (and [(some? ?v)] [?e ?a ?v])
(and [(nil? ?v)] [?e ?a _]))]}
:args [db-val :attr val]})
but got this error:
Execution error (ExceptionInfo) at datomic.core.datalog/eval-rule$fn (datalog.clj:1537).
Unable to find data source: $__in__4 in: ($ attrs $__in__3 $__in__4)
i found an example of this error here: https://gist.github.com/stuartsierra/2429063
which shows how it's not allowed to pass in nil arguments to queries, but it doesn't suggest what to do instead.#2022-02-1806:48onetomi figured using false for the value of val would work, except for :db.type/bool attributes:
{:query '{:find [?e]
:in [$ ?a ?v]
:where [(or (and [(= false ?v)] [?e ?a ?v])
(and [(not= false ?v)] [?e ?a _]))]}
:args [db-val :attr val]}
or some other sentinel value, e.g. ::any or a symbol like `any`, but not sure what's a good sentinel value.
i have the feeling that this should be doable in a lot simpler way...#2022-02-1809:16kipzThe error looks like it's related to the inputs - it can't find the database. Have you tried just :in $ ?a ?v instead?#2022-02-1809:24kipzBut then - I tend to use the list form. This works for me#2022-02-1809:24kipz[:find ?e
:in $ ?a ?v
:where
(or
(and [(= false ?v)]
[?e ?a ?v])
(and [(not= false ?v)]
[?e ?a _]))]#2022-02-1813:54Søren SjørupThanks for posting this, I ran into the same problem with nil inputs today. I wonder what the reason to not allow it could be.#2022-02-1815:24Joe LaneAn alternative approach here is to dynamically construct the query. cond-> is your friend here.#2022-02-1818:06onetom@U0CJ19XAM yeah, i used this article, which used cond->, as a basis for my experiments: https://grishaev.me/en/datomic-query/ but in my case, i not just accreting where clauses, but have an either-or situation.#2022-02-1818:08onetomanyway, i've ended up branching outside of the query, since both query variants are trivially short.
(defn pull-by-attr
"Pulls all entities, using the `selector`, which have an attribute described by
`attr-ref`, optionally being equal to `attr-val`.
Number of results are not limited by default (:limit -1).
:selector / selector the selector expression
:a / attr-ref attribute :db/id or :db/ident or even a lookup-ref
:v / attr-val optional value
:debug? just return the query map, without executing it
For a complete description of the selector syntax, see
.
Returns a sequence of maps.
The arity-2 version takes :selector, :a and :v in arg-map, which
also supports :timeout, :limit and :offset. See namespace doc."
([db arg-map]
(let [{:keys [selector a v]} arg-map]
(pull-by-attr db selector a v (dissoc arg-map :selector :a :v))))
([db selector attr-ref]
(pull-by-attr db selector attr-ref nil))
([db selector attr-ref attr-val-or-opts]
(if (map? attr-val-or-opts) ;; an attr-val can't be a map ever
(pull-by-attr db selector attr-ref nil attr-val-or-opts)
(pull-by-attr db selector attr-ref attr-val-or-opts nil)))
([db selector attr-ref attr-val {:as query-opts :keys [debug? timeout limit offset]}]
((if debug?
identity
(comp (partial map first) d/qseq))
(-> (if (nil? attr-val)
{:query '[:find (pull ?e attrs) :in $ attrs ?a :where [?e ?a _]]
:args [db selector attr-ref]}
{:query '[:find (pull ?e attrs) :in $ attrs ?a ?v :where [?e ?a ?v]]
:args [db selector attr-ref attr-val]})
(assoc :limit -1)
(merge (dissoc query-opts :debug?))))))#2022-02-1818:09onetomi borrowed the docstring from the docstring of d/pull#2022-02-1818:16onetomand here are some invocation examples:
(pull-by-attr db {:selector '[*]
:a :some/attr
:v "val"
:debug? true
:limit 5})
(pull-by-attr db '[*] :some/attr)
(pull-by-attr db '[*] :some/attr {:debug? true})
(pull-by-attr db '[*] :some/attr "val")
(pull-by-attr db '[*] :some/attr "val" {:limit 5})#2022-02-1818:19onetomand these are equivalent (i think):
(pull-by-attr db '[*] :some/attr nil)
(pull-by-attr db '[*] :some/attr {:limit 5})
(pull-by-attr db '[*] :some/attr nil {:limit 5})#2022-02-1903:33onetomthis could be implemented with d/index-pull too, but that would also return entities, where (< (:some/attr entity) "val"), so we would need to limit the results outside of the query engine (eg (take-while #(-> % :some/attr (= "val")))), so it's probably doing some extra work (because of chunking)#2022-02-2109:24Ben Hammondwhen I use a datomic pull with a :limit
https://docs.datomic.com/cloud/query/query-pull.html#limit-option
do the results get returned in a stable order?
Ideally I would like to limit my results to the 10 most recent entities:
Can I express that just with a pull spec?#2022-02-2110:21kipzThe order does seem to be stable - I expect it reflects how the query is executing over the indexes. As far as paging and sorting results is concerned, that isn't supported as far as I know. We use index-pull to query specific indexes (in your case a date), then filter the results, then pull more until we reach our limit. It's not pretty, but it's what we have right now. In some other cases, we maintain the equivalent of "views" - such as a reference to the "entity with latest X where X.attribute = blah". We know we want to track this at transaction time, so it's no big deal. It's just a shame that we have to sully our otherwise fully normalised data model. For me, this is the most frustrating area of datomic because it's such a common use case for us. There was some talk of this here: https://forum.datomic.com/t/idiomatic-pagination-using-latest-features/1454/5 but it's gone quiet
#2022-02-2111:37Ben HammondI'm hoping that I can order the entities by Entity Id
Because Entity Id is monotonically increasing over time#2022-02-2111:38Ben HammondIs that true on datomic cloud?#2022-02-2114:43souenzzo@U793EL04V maybe you can use https://docs.datomic.com/cloud/query/query-index-pull.html
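A hedged sketch of the d/index-pull approach souenzzo links (assuming a hypothetical :thing/created-at attribute with :db/index set, walked via :avet; all names are illustrative):

```clojure
(require '[datomic.client.api :as d])

;; Walk :avet backwards from "now" and keep the first 10 entities.
;; :thing/created-at must be an indexed (or unique) attribute.
(->> (d/index-pull db {:index    :avet
                       :selector [:thing/id :thing/created-at]
                       :start    [:thing/created-at (java.util.Date.)]
                       :reverse  true})
     (take 10))
;; Ordering comes from the index itself, so it is stable; any further
;; filtering happens in ordinary lazy seq functions outside the query engine.
```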
#2022-02-2114:45Ben Hammondoooh may be I can...
thanks for the hint#2022-02-2120:20xcenoI'm wondering about a good way for storing the following:
{:rule/id 1234567
:rule/rule-name "Some rule name"
:rule/predicate-fn `when
:rule/predicate-test-fn `entity-equals
:rule/predicate-test-fn-args {:entity [:some.thing/id #uuid"cfea961d-e5cf-4d38-9633-2eb22ba09e5a"]}
:rule/action `add-properties!
:rule/action-args {:properties {:some-kw 123}}}
I've built a little system that takes a bunch of these maps and assembles some clojure code from them.
I'm expecting to have a couple hundred of these maps at max.
Now, I'd like to store these maps somewhere. The first idea was to put them into datomic as a string and read them back in later, but searching through slack revealed https://clojurians.slack.com/archives/C03RZMDSH/p1613255057226100?thread_ts=1613250450.201700&cid=C03RZMDSH(?).
So, would I be better off storing them as .edn files in S3?
I don't want or need to pull them apart into separate datoms, nor do I need to search their content.#2022-02-2120:57Joe LaneHey @U012ADU90SW, I would consider this scenario to be a bit different than Gene Kim's.
Do you expect all maps to be within an order of magnitude of the size you showed above?
Ironically, you're extremely close to having the datomic schema for your rule entity. Is there some reason you don't want to just make them regular entities?#2022-02-2121:06xceno> Do you expect all maps to be within an order of magnitude of the size you showed above?
yeah they shouldn't grow much bigger than that
> [...] Is there some reason you don't want to just make them regular entities?
Well, I guess that's true!
I'm mostly worrying about the :rule/....-args fields which could contain anything.
So when saving them as regular entities i'd need to convert these to strings anyway, that's why I figured I could as well just serialize the whole thing.
In my mind having a rule split into EAV parts doesn't make much sense because I always need all fields anyway. And I wouldn't "clutter" datomic with these attributes, but maybe that thought is misguided#2022-02-2212:57Ivan FedorovHow does one store vectors properties on Datomic?
having a structure like
{:command/query "[:find …]"
:command/title "something"}
Thinking if it’s a good idea to store query and its params as a string#2022-02-2215:28kipzWe do something like this, and we use pr-str/read-string and use the string type. Works for us.
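The pr-str/read-string round-trip kipz describes, as a minimal self-contained sketch (using clojure.edn/read-string, the safer reader, rather than clojure.core/read-string):

```clojure
(require '[clojure.edn :as edn])

(def command
  {:command/title "something"
   :command/query '[:find ?e :where [?e :user/id _]]})

;; Serialise the query vector to a string for a :db.type/string attribute...
(def stored (update command :command/query pr-str))

;; ...and read it back when loading the entity.
(def loaded (update stored :command/query edn/read-string))

(= command loaded) ;; => true
```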
#2022-02-2213:29pkovastoring the query as a string is a bad idea, better to quote the vector {:command/query '[:find ...] :command/title "something"}#2022-02-2214:35Ivan Fedorovwhat value type would that be?#2022-02-2313:48manutter51How can I get datomic to tell me all the transactions that were transacted between start-time and end-time? I’ve gotten as far as
[:find ?e ?a ?v ?op
:in $ ?log ?start ?end
:where
[(tx-ids ?log ?start ?end) [?tx ...]]
[(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]]
, but that just gives me
#{[13194272974810 50 #inst "2022-02-22T14:38:18.227-00:00" true]}
How do I get to the actual value that was changed?#2022-02-2313:56favilaThat is the value that was changed? #2022-02-2313:57favilaThis appears to be just one empty transaction. Try different start and end parameters to get others#2022-02-2313:57manutter51What does 50 mean in that response?#2022-02-2313:57favilaThat’s an attribute#2022-02-2313:58favilaLikely :db/txInstant#2022-02-2313:58manutter51Ah, hmm#2022-02-2313:59favilaConsider using d/tx-range also#2022-02-2313:59favilaIf all you really want is the full log it will be faster and offers laziness#2022-02-2313:59manutter51Oh snap, that looks like exactly what I’m after, thanks a ton!#2022-02-2314:05manutter51Yeah, I can see everything I’m looking for now, thanks again!#2022-02-2320:24johanatanhi, technically i'm trying the following in datascript, but if there were some fundamental problem with my syntax, I'd hope this channel could spot it.
why isn't the keyword mentioned in my predicate being filtered out?
core=> (cljs.pprint/pprint (d/q `[:find [?k ...] :where [?e :root/user ?r][?r ?k ?v][('not= ?k :user/feature-flags)]] @connection))
[:user/id
:user/disable-global-role?
:user/has-to-accept-license?
:user/first-name
:user/last-name
:user/feature-flags
:user/role
:user/company-id
:user/avatar-url
:user/app-roles
:user/email]
nil
if I try the complement, I get the same result list:
core=> (cljs.pprint/pprint (d/q `[:find [?k ...] :where [?e :root/user ?r][?r ?k ?v][('= ?k :user/feature-flags)]] @connection))
[:user/id
:user/disable-global-role?
:user/has-to-accept-license?
:user/first-name
:user/last-name
:user/feature-flags
:user/role
:user/company-id
:user/avatar-url
:user/app-roles
:user/email]
nil
is this a problem with datascript or my use of it?
(note that if I use = or not= rather than the quoted alternates, it complains that the built-in predicate doesn't exist. so i know it's at least doing some analysis of these predicates-- apparently just not actually running them).#2022-02-2320:27favilaIn datomic, [?r ?k ?v] would bind a number (the attribute entity id) to ?k#2022-02-2320:28johanatanthat would be ?r in my testing with datascript#2022-02-2320:28johanatanbut we can see the list of ?k's returned in the result list here and they are keywords
#2022-02-2320:29johanatane.g., the following has attribute keys in the "middle" position:
https://github.com/tonsky/datascript/blob/cd129b7a9ef2fc31fa95873c376691f272752832/test/datascript/test/query_fns.cljc#L131#2022-02-2320:30favilaCould it be some edge case with qualified symbols and predicate clauses?#2022-02-2320:31favilatry this:#2022-02-2320:31favila(d/q '[:find [?k ...] :where [?e :root/user ?r][?r ?k ?v][(not= ?k :user/feature-flags)]] @connection)
#2022-02-2320:31favilanote the quoting changed#2022-02-2320:32favila[?k]` would expand to [current-ns/?k]#2022-02-2320:32favilamaybe that ?k isn’t unifying with the one inside the (not= ?k ...) for whatever reason#2022-02-2320:33johanatanYea I tried ?k# too to remove the ns#2022-02-2320:34johanatanOk I'll try outer single quote #2022-02-2320:36johanatanoooh, the combination of outer single quote and no quote on the pred fn worked#2022-02-2320:36johanatanweird#2022-02-2320:37johanatancore=> (cljs.pprint/pprint (d/q '[:find [?k ...] :where [?e :root/user ?r][?r ?k ?v][(not= ?k :user/feature-flags)]] @connection))
[:user/id
:user/disable-global-role?
:user/has-to-accept-license?
:user/first-name
:user/last-name
:user/role
:user/company-id
:user/avatar-url
:user/app-roles
:user/email]
nil
core=> (cljs.pprint/pprint (d/q '[:find [?k ...] :where [?e :root/user ?r][?r ?k ?v][(= ?k :user/feature-flags)]] @connection))
[:user/feature-flags]
nil
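The silent match-everything behavior favila diagnosed can be reproduced without datascript at all; this is plain Clojure symbol invocation (a sketch, not datascript internals):

```clojure
;; 'not= reads as (quote not=), and a quoted symbol called as a function
;; performs a collection lookup: (sym coll not-found) is (get coll sym not-found)
(def k :user/id)

(def res ('not= k :user/feature-flags))
;; a keyword is not a collection, so the not-found argument comes back:
;; res is :user/feature-flags, which is truthy, so every row "passes" the predicate
```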
#2022-02-2320:37johanatanthanks!#2022-02-2320:38johanataninterestingly i have the following elsewhere in my codebase for query generation and it works (notice the backtick):
(defn query-root-scalar [key]
`[:find ?v# . :where [~root-id ~(keyword "root" (name key)) ?v#]])
#2022-02-2320:41favilaoh, it’s because 'not= expands to the quoted form#2022-02-2320:42favilayou needed ~'not=#2022-02-2320:42johanatandoh! yep. good call#2022-02-2320:42johanatanso will it hurt anything that all of my ?s will be "namespace-bound"#2022-02-2320:42johanatanI was trying to get fancy by using # suffix to make them unique#2022-02-2320:43favilaso it was invoking ('not= x :something) which is symbol-lookup of not= in collection x, or :something as not found, so it was always returning :something thus everything matched.#2022-02-2320:44johanatanyea but here's the weird part:
https://github.com/tonsky/datascript/blob/0.11.6/src/datascript/query.cljc#L140#2022-02-2320:44johanatanI was getting "Unknown predicate error ..." when the predicate wasn't found in built-ins linked to above#2022-02-2320:44favilare “weird part”, yeah but it’s actually (quote not=) which isn’t in that list#2022-02-2320:45johanatanright which should produce an "Unknown predicate error ..." 🙂#2022-02-2320:45favilaor, it will just eval to itself?#2022-02-2320:46johanatanmm, not sure. my brain is getting twisted at these meta levels. something weird was going on for sure.#2022-02-2320:47favila> so will it hurt anything that all of my ?s will be “namespace-bound”
It’s just weird, you may shake out edge cases#2022-02-2320:47favilaIt defies the usual assumptions#2022-02-2320:48johanatanyea but that seems to be sort of the SOP in these datascripts#2022-02-2320:48johanatanperhaps going with the flow is called for 🙂#2022-02-2320:48favilahttps://github.com/brandonbloom/backtick may make it easier to mix quasi-quoting and unqualified symbols
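A minimal REPL sketch of the ~' device favila suggested, which the backtick library generalizes:

```clojure
;; inside syntax-quote, a bare symbol is namespace-qualified and 'sym
;; becomes (quote sym); unquote-quote (~') emits the plain symbol instead
(def clause `[(~'not= ?k# :user/feature-flags)])

(ffirst clause)
;; => not=, an unqualified symbol the query engine resolves as a built-in
```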
#2022-02-2322:52johanatanhi, is it possible to do a pull expression with a wildcard / except certain fields?
something like the following:
[:find [(pull ?e [* !:this/field]) ...
#2022-02-2323:16favilano#2022-02-2323:16johanatanhow about the ability to apply a "result transform" to the result of a :find ?#2022-02-2323:17johanatansuch as the: into {} portion of the following:
core=> (cljs.pprint/pprint (into {} (d/q '[:find ?k ?v :where [?e :root/user ?r][?r ?k ?v][(not= ?k :user/feature-flags)]] @connection)))
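What the into {} wrapper in that snippet does, shown on a literal relation (the set-of-tuples shape a :find ?k ?v query returns; the sample data is made up):

```clojure
;; d/q with :find ?k ?v yields a set of [k v] tuples;
;; into {} treats each tuple as a map entry
(def rel #{[:user/id 1] [:user/email "a@b.c"]})

(def m (into {} rel))
;; => {:user/id 1, :user/email "a@b.c"}
```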
#2022-02-2323:17johanatanthe library i'm using won't allow the "outside wrapper" transform such as above, so i'd need a way to get it inside the query itself#2022-02-2323:27favila(map #(update % 0 dissoc the-field)) ?#2022-02-2323:35johanatanwhere would that fit into the overall [:find ... } ?#2022-02-2323:36johanatanactually i don't think that would work. update and dissoc require a map which in my example doesn't exist until into {} is executed#2022-02-2323:38johanatanlook at the reg- functions below to understand why any "outer wrapper" solution won't work:
https://github.com/denistakeda/re-posh/blob/master/src/re_posh/subs.cljc
of course, it could be done with two subs, one taking the input from the first and then transforming it. but imagine that we want to do this for 100s of subscriptions and suddenly 250 subs become 500 subs (which is overly verbose)#2022-02-2400:00favilaUpdate doesn’t require a map#2022-02-2400:01favilaIs there a reason you don’t enumerate the fields you need?#2022-02-2400:08johanatanyea, verbosity. it's N-1 fields that I need#2022-02-2400:08johanatanand N is on the larger side#2022-02-2400:09johanatanbut i think i will just restructure the facts around this data, break it out to the top level#2022-02-2400:09johanatanupon insertion#2022-02-2400:09johanatanwhich will be fine for this case. but it's unfortunate that there isn't a better solution for this#2022-02-2400:09johanatanin the general case#2022-02-2422:13hkrishHello Datomic Cloud experts,
In dev environment, I am developing a Clojurescript/re-frame app, with a Datomic Cloud backend. The SPA access the local Jetty server, and the server code uses client-api to access Datomic Cloud, using Gateway (...
)
Two issues, I face all the time.
1. Startup time. Even from the REPL, the queries take a long time to get a response, first time. If I send consecutively other queries, the response is just fine. Sometimes, I even get Datomic Timeout error.
2. Probably because of the same issue, but I put it as a separate issue, we are not able to test the app with the CLJS browser app.
What are the possible solutions available? Thanks for the suggestions.
Also is there any change in the dev process needed? The CLJS app requires constant interaction with the server code and Datomic db. The CLJS is also under development. Can we use dev-local within a sever? The challenge I have then is with the REPL. The nREPL or Jetty put a lock on the data directory.
Update: Now I see a transaction is failing due to CORS, inside the server. The log file shows this:
[qtp670918934-35] INFO io.pedestal.http.cors - {:msg "cors response processing", :cors-headers {"Access-Control-Allow-Origin" "http://localhost:9875", "Access-Control-Allow-Credentials" "true", "Access-Control-Expose-Headers" "Strict-Transport-Security, X-Frame-Options, X-Content-Type-Options, X-Xss-Protection, X-Download-Options, X-Permitted-Cross-Domain-Policies, Content-Security-Policy"},
Many thanks in advance!#2022-02-2509:27xcenoIf you don't have a specific reason for using a real datomic cloud instance for development, you should probably use dev-local. It's meant to be used for exactly this scenario.
In my app I'm using several config files like config/dev.edn and config/prod.edn. Dev is configured to use dev-local and once deployed, the app uses the prod config to access datomic cloud.
I'm not familiar with pedestal though, so I can't help you with that#2022-02-2509:31hkrishThank you. How do you run a local server and nREPL with dev-local?#2022-02-2509:37xcenoyou run everything like you normally would and then just construct a datomic cloud but with dev-local config, like described here: https://docs.datomic.com/cloud/dev-local.html#using
That's the critical piece:
(require '[datomic.client.api :as d])
(def client (d/client {:server-type :dev-local
:system "dev"}))#2022-02-2520:36hkrishThanks again. I just tried this. Looks great. Thanks for the suggestion. But I have another issue now. I use Cursive with IntelliJ, and access a nREPL server. But both a jetty server and an nREPL wouldn't run with dev-local. I haven't been able to figure it out. Any hint is welcome.
Thanks again.#2022-03-0101:51jacob.maine@U9RU257K3 it sounds like you’re running into problems constructing a classpath that includes the repl, jetty and dev-local. Classpath problems are very hard to debug without access to the code.
My speculation is that when you start the nREPL, you aren’t using the deps.edn aliases that would pull in all 3 of those dependencies. I recommend that you reach out to the Cursive community (I’m not a Cursive user) to help you configure Cursive to start the repl correctly.#2022-03-0201:46hkrishThank you Jacob. I have included the dependencies as you suggested. Still I get this.
Execution error (IOException) at datomic.core.io.file-channels/lock! (file_channels.clj:35).
File <path to the storage-dir.............>/.lock is in use by another process.
Like you suggested, it could be a classpath issue. Not able to figure it out. Will try to reach out to Cursive group support. Thanks for response.
In the mean time, back to the client api with the cold start api issue!#2022-03-1007:46jacob.maine@U9RU257K3 I’ve never seen that error. Do you have any other processes connected to the same dev-local :system? If so, stop them and try again. If not, you may want to try deleting that .lock file.#2022-02-2510:31Andrej GolcovHi,
One more question, related to my previous question. We have parent-child relations in datomic, something like:
[
{ :rootId "1" :childIds [{:childType "a" :childId "2"} {:childType "b" :childId "3"}]}
{ :rootId "2" :childIds [{:childType "b" :childId "4"}]}
]
IOW, root entity references child entity. Child entity has a type ("a", "b" in the example above)
Q: how can I query all root entities with child entities of a specific type (e.g. "a"), or nil/defaultValue if there is no such childType (kind of outer join).
For example above, I need the following rows:
rootId - childIdOfTypeA
"1" - "a"
"2" - nil/orSomeDefaultValue
thanks#2022-02-2516:16zendevil.ethwhen I’m fetching a datomic entity, is there a way to get the timestamp of the transaction too?#2022-02-2516:17ennwhich transaction?#2022-02-2516:22zendevil.eth(def first-movies [{:movie/title "The Goonies"
:movie/genre "action/adventure"
:movie/release-year 1985}
{:movie/title "Commando"
:movie/genre "action/adventure"
:movie/release-year 1985}
{:movie/title "Repo Man"
:movie/genre "punk dystopia"
:movie/release-year 1984}])
(d/transact conn {:tx-data first-movies})
I can get entities:
(def all-movies-q '[:find ?e
:where [?e :movie/title]])
(d/q all-movies-q db)
gives:
[[17592186045418] [17592186045419] [17592186045420]]
But what about the time when the entity was created?#2022-02-2516:24Lennart Buitdid you find: https://github.com/Datomic/day-of-datomic/blob/master/tutorial/time-rules.clj#2022-02-2516:25Lennart Buitalthough, getting stuff from the history is slower than from the present#2022-02-2516:28favila> But what about the time when the entity was created?#2022-02-2516:29favila“creation” has an application-defined meaning#2022-02-2516:31favilaYou could ask for the time an entity id was minted (i.e. a tempid in tx-data was replaced with a new entity id), but that’s likely not meaningful to the application except with a pile of assumptions. You can do this because the “T” part of the entity id interleaves with the T of the transaction entity#2022-02-2516:32favila(although I’m not sure this is true in cloud--I’ve heard rumors it isn’t)#2022-02-2516:35Lennart Buit(This is the concept being referred to: https://vvvvalvalval.github.io/posts/2017-07-08-Datomic-this-is-not-the-history-youre-looking-for.html, you shouldn’t use the history for your applications core functionality. You could use it for auditing purposes)#2022-02-2516:35zendevil.ethjust the time of the entity tx, is there a way to get it by default? 
If not what’s the idiomatic way to store the entity time?#2022-02-2516:36zendevil.ethdoes :db/txInstant exist by default?#2022-02-2516:36favila> If not what’s the idiomatic way to store the entity time?#2022-02-2516:36favilamake an attribute, and write the appropriate time#2022-02-2516:37favila> does :db/txInstant exist by default#2022-02-2516:37favilait always exists, but only on the transaction entity#2022-02-2715:10souenzzo@U01F1TM2FD5 you have one "modified date" for each attribute/value in the entity#2022-02-2715:12zendevil.ethSure#2022-02-2516:41zendevil.ethIs there a way to ensure that when I’m transacting an entity, that certain attributes exist in the tx and others don’t, or is it something that should be checked in the application level?#2022-02-2516:41Lennart Buityou can assert that attributes are present#2022-02-2516:42Lennart Buithttps://docs.datomic.com/on-prem/schema/schema.html#entity-specs#2022-02-2516:43Lennart Buitbut; I believe that to only be positive, so only that an entity has certain attributes; not that it cannot have others#2022-02-2516:43Lennart Buit(like how maps are ‘open’ in clojure as well, intentionally)#2022-02-2516:46favilaThe entity predicate can do anything it wants, including check that an attribute is not asserted.#2022-02-2516:47favilabut without writing code, entity specs can only require that an attribute is asserted#2022-02-2516:47Lennart BuitI stand corrected, I was talking about the required attribute feature#2022-02-2518:37Lone RangerI have a really old .h2 datomic database file that I need to recover some data from. We have a pro on-prem license. Is there any way I can access that without having to build a special purpose project with datomic free?#2022-02-2518:58Lone RangerJust to close the loop:
I used this docker: https://github.com/alexanderkiel/datomic-free and attached the data directory as a volume, then I was able to use
(d/get-database-names "datomic:")
to get the name of the databases and recover the data#2022-02-2521:09JohnJyou can use the dev mode storage in datomic pro#2022-03-0122:03jdkealyHow can I allow a datomic query to return nil ?
(d/q
'[:find ?sometimes-this ?sometimes-that ;; sometimes both!
:in $ ?user
:where
[?user :user/likes ?sometimes-this]
[?user :user/dislikes ?sometimes-that]
] db-conn (:db/id user) )
this query would return nil if a user had never likes nor disliked anything#2022-03-0122:14Joe Lane(d/pull (d/db conn) '[:user/likes :user/dislikes] (:db/id user))
#2022-03-0122:14Joe Lane^^ Will return an empty map if neither#2022-03-0122:29jdkealycool that looks interesting#2022-03-0122:29jdkealywhat about if the query is more advanced ?#2022-03-0122:32jdkealyI made a trivial example, but my example is actually like ...
(d/q
'[:find ?presenter ?co-presenter ;; sometimes both!
:in $ ?user
:where
[?user :user/classes ?class]
[?class :class/presenter ?presenter]
[?class :class/co_presenter ?co-presenter]
] db-conn (:db/id user) )#2022-03-2415:18jacob.maineThat example can also be achieved with pull, either:
(d/pull db '[{:user/classes [:class/presenter :class/co_presenter]}]
(:db/id user))
or:
(d/q '[:find (pull ?user [{:user/classes [:class/presenter :class/co_presenter]}])
:in $ ?user]
db (:db/id user))#2022-03-0212:04AlexandraHii everyone! I am a clojure beginner and I am a little stuck setting dev-local to develop and test Datomic Cloud, I get this error
; Execution error (ExceptionInfo) at datomic.core.anomalies/throw-if-anom (anomalies.clj:94).
; You must specify an absolute path or the keyword :mem under :storage-dir in a map in
your ~/.datomic/dev-local.edn file, or in your call to client
when I execute this line
(dl/divert-system {:system "production"})
I already create ~/.datomic/dev-local.edn file with an absolute path
{:storage-dir "home/User/folder-1/folder-2/data"}
I am following this documentation https://docs.datomic.com/cloud/dev-local.html#divert-system if someone could give me some clue it would be wonderful Thank you for your time#2022-03-0212:46Robert A. RandolphAn absolute path should start with a leading slash.
Have you checked that path in your shell by trying to cd to it from some non-root directory?#2022-03-0213:13Alexandra😰 Thank u :,) that was the problem, I spent like 1 hour looking the error. Thank u very much Robert#2022-03-0320:38Michael Stokleydo datomic rules have any concept of namespaces?#2022-03-0320:39Michael Stokleyor is it best to put any and all context into the rule name itself#2022-03-0322:45jarrodctaylorWhat problem are you trying to solve?#2022-03-0421:44Michael Stokleyi have a namespace named $service.$module.$entity. in it i have a rule that repeats much of the context listed in the namespace name - for example, the entity.
if this were a clojure var, i could omit this.#2022-03-0421:47Michael Stokleyi just thought that since clojure and spec make heavy use of namespaces, there might be something like that in datomic, and that i had missed it.#2022-03-0322:37Michael Stokleyi suppose you can give them a namespace in as much as the rule names are symbols, and symbols can have namespaces#2022-03-0713:59Leaf GarlandIn this https://clojurians.slack.com/archives/C03RZMDSH/p1618440859298300 from last year, there is an example pull call like so:
(d/pull (d/db conn) '[:user/type] [:user/id "a"])
=> #:user{:type #:db{:id 87960930222153, :ident :user.type/a}}
The returned value for :user/type has the :db/id and the :db/ident.
When I run a similar pull, I only get the :db/id. If I want the :db/ident value then I have to put it in the pull pattern explicitly. My schema is very similar to the enumerations example in datomic docs.
Any ideas why I don't see the :db/ident key/value in my returned map?#2022-03-0714:20favilaProbably in this example :user/type was a :db/isComponent true#2022-03-0714:21favilawhen you don’t specify what to retrieve from a ref attribute (i.e. you did [:user/type] not [{:user/type [:my/attr]}]), what happens depends on the isComponent-ness of the attribute#2022-03-0714:22favilaif not isComponent, you will only get :db/id#2022-03-0714:22favilaif isComponent, it’s the same as *#2022-03-0714:22favilamy recommendation: always specify what you want from your refs#2022-03-0720:02Leaf GarlandThanks. Agreed about specifying the attributes. I was curious about the difference in results. #2022-03-0809:34Kris CHow do you use :xform if the pull exp is like this [{:user/type [:my/attr]}]?#2022-03-0813:08Kris C@U09R86PA4 any hint on this ^^?#2022-03-0716:16Joshua SuskaloWith datomic on-prem, what is the return type of custom aggregation functions? The one example that I have found is in the docs and returns a single value for making an aggregate that returns only 1 item, but how does this work when the aggregation function wants to return N items? And if the answer is simply to return a sequence, how does datomic keep track of the order of the items? I would assume it can't be metadata since not all types that datomic can store can have metadata.
[:find ?title (max n ?year)
:in $ n
:where [?entity :movie/year ?year]
[?entity :movie/title ?title]]#2022-03-0716:30favilamax is already getting a group of years which share a ?title#2022-03-0716:31Joshua SuskaloOh, then maybe I have misunderstood how exactly this works. My understanding was that it would attempt to take the top n title/year pairs based on the year.#2022-03-0716:31favilanope#2022-03-0716:31Joshua SuskaloHmm. Is there a way to do this?#2022-03-0716:32favilatypically you’d return all results and then sort#2022-03-0716:33favilaif that’s not an option, a custom aggregate which accepts a tuple of all results and does the sort and top-n inside of it#2022-03-0716:33favilaor redesign the schema to expose what you want to sort by as an index#2022-03-0716:33favila(composite tuple)#2022-03-0716:34favilaif you’re on on-prem, there is no overhead to sorting all results outside the query--that data was literally all just fetched in your peer anyway, and the full result set already made, so nothing is saved by pushing that work into the query#2022-03-0716:34favilafor client api, sometimes that result went over a wire, so yeah, it would be nice to reduce the result size before returning it#2022-03-0716:40Joshua Suskaloright, that makes sense.#2022-03-0716:40Joshua SuskaloThanks!#2022-03-0720:02Ben HammondHi. Is it possible to specify a txInstant value directly within a pullspec?
I know that my entity was transacted in a oner
(but I suppose datomic cannot assume that...)#2022-03-0720:05kennyNo. You can add a ref to the tx entity on your domain entity though. #2022-03-0720:06Ben Hammondoh? How would I do that?#2022-03-0720:09Ben HammondI mean; can I do that within the single call to (d/transact ?#2022-03-0720:10Ben Hammondor do I have to make a second transaction to poke the tx ref into the domain entity?#2022-03-0720:10favila"datomic.tx" is the tempid of the current transaction, so yes, like this [:db/add my-entity :ref-to-tx "datomic.tx"]
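favila's [:db/add ... "datomic.tx"] pattern, expanded into a full transaction sketch. :movie/created-tx is a hypothetical ref-typed attribute, and the d/transact and d/pull calls are commented out because they need a live connection:

```clojure
;; "datomic.tx" is the reserved tempid that resolves to the transaction
;; entity currently being processed
(def tx-data
  [{:db/id "new-movie"
    :movie/title "Commando"
    :movie/created-tx "datomic.tx"}])   ; ref from the domain entity to its tx

;; (d/transact conn {:tx-data tx-data})
;; later, the timestamp comes along for free:
;; (d/pull db [{:movie/created-tx [:db/txInstant]}] movie-eid)
```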
#2022-03-0720:11Ben Hammondperfect.
thankyou#2022-03-0720:11favilayou can also set db/txInstant (in some circumstances) the same way: https://docs.datomic.com/cloud/transactions/transaction-processing.html#explicit-txinstant#2022-03-0720:11Ben Hammondwell. I'm too lazy to put an explicit timestamp into my domain entity#2022-03-0720:12kennyTread carefully. I feel like most uses of the tx entity for domain info are better suited to be modeled with attributes under your control. #2022-03-0720:12Ben Hammondso I am hoping to get a timestamp for free#2022-03-0720:12Ben Hammondyeah you are prolly right; I might live to regret this mightnt I
#2022-03-0720:12favilayeah, that’s iffy#2022-03-0720:12Ben Hammondok#2022-03-0720:12Ben Hammondthanks#2022-03-0811:19Kris CHow do you use :xform on a map specification [{:user/type [:db/ident]}]? I want to :xform the resulting {:db/ident :something}#2022-03-0813:20favila[{(:user/type :xform fn) [:db/ident]}]
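favila's answer, expanded into full pull patterns for comparison. myapp/ident->name is a hypothetical transform symbol, and note favila's own caveat that this placement of :xform may be undefined behavior:

```clojure
;; plain map spec: pull returns {:user/type {:db/ident :user.type/a}}
(def plain-pattern '[{:user/type [:db/ident]}])

;; attr-with-options in key position: the (attr :option val) form
;; stands wherever a bare attribute could appear in the grammar
(def xform-pattern '[{(:user/type :xform myapp/ident->name) [:db/ident]}])
```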
#2022-03-0813:21favilaI don’t know how “undefined behavior” this is though#2022-03-0813:21favilabut it does work#2022-03-0813:22Kris Chuh, how did you figure this one out?!#2022-03-0813:22favilait fits the grammar#2022-03-0813:22Kris C👍#2022-03-0813:22Kris Cthanks!#2022-03-0813:22favilait’s an attribute spec where a bare attribute would be#2022-03-0813:22favilathat’s the pattern#2022-03-0813:36Kris CThanks again, @U09R86PA4, you're the man! 😉#2022-03-0815:53Shuky BadeerHi guys! I'm using presto SQL CLI to query Datomic from the terminal using sql.
Now the output here makes some sense. One thing i noticed here is that it appends "v1/statement" to the original server url (as shown in the first line of the error). I'm not sure what I'm doing wrong but would appreciate some feedback.#2022-03-0816:01Joe Lane@U033V0AJFU4 Can you show your client config in your etc/catalog/<catalog>.properties?#2022-03-0816:11Shuky BadeerHi @U0CJ19XAM I suppose I'll need to ssh into the Datomic EC2 server for that. Since I currently don't have the private pem key I'll get back to you on that tomorrow?#2022-03-0816:12Joe LaneWhy would you need to do that? datomic analytics can be run from your local laptop and connect to your cloud nodes.#2022-03-0816:15Shuky BadeerOh ok sorry i'm new to the clojure/datomic world.
So in my local db there's no catalog folder in the etc folder#2022-03-0816:16Shuky BadeerThat's why i thought u were talking about the ec2 server#2022-03-0816:17Shuky Badeer@U0CJ19XAM etc/catalog/ is a folder that was supposed to be set up?#2022-03-0816:18Joe Lanehttps://docs.datomic.com/cloud/analytics/analytics-configuring.html#2022-03-0816:19Joe LaneCheck out that configuration page, it should have the information you need. And yes, the catalog folder and it's contents need to be set up when you configure your trino cluster.#2022-03-0816:42Shuky BadeerRoger that! Will get going on it in the morning and will let u know what happened.
Thanks a lot @U0CJ19XAM much appreciated!
#2022-03-1112:03Shuky BadeerHello from the future, anyone reading this thread, turns out Windows has WSL (Windows Subsystem for Linux) which allows you to run Ubuntu on Windows! Incredibly amazing!
After setting that up, and following the tips from @U0CJ19XAM I was able to run Presto on the Ubuntu subsystem and query Datomic using SQL - Pretty cool!
Big thank you @U0CJ19XAM for the support!
#2022-03-0819:15wilkerluciohello everyone, how can I find all transactions that happen to a specific entity on datomic?#2022-03-0819:21favila(into #{} (map :tx) (d/datoms (d/history db) :eavt specific-entity)) possibly#2022-03-0821:38zendevil.ethI have a function like so:
(defn by-id
"take an id, return the instance of it"
[id & [_db]]
(let [_db (or _db (_d))]
(try
(d/entity _db id)
(catch Exception _ nil))))
it returns the entity if I give it an id: (by-id 12123413535) => #:db{:id 12123413535 }
But it returns this for every id, even the ones that were retracted. How do I identify if an id was retracted?#2022-03-0821:41favila“id was retracted” — you need to narrow down what this means for your application#2022-03-0821:43faviladatomic does not come with a notion of existence, that’s a domain concept. It only models assertions and retractions, and entity maps are a projection of those into map form by filtering+joining all datoms which share the same E value.#2022-03-0821:45favilaif you have a unique public id you put on your entities, and the presence of that assertion means the entity “exists” in your domain model (very common), you can test for the presence or absence of that assertion and filter; or you can use it directly as a lookup ref [:my-id 123] and the lookup will simply not resolve to an entity id if it isn’t there.#2022-03-0914:45emccueI know with xtdb you have to explicitly await a transaction in order for you to be able to read your writes.#2022-03-0914:47emccueBut now i'm uncertain about how you are able to read your writes in Datomic - getting "ok" back from the transactor i don't think implies that your next read from storage will have the transaction present#2022-03-0914:47emccueor if it does, i don't understand how#2022-03-0914:48Joe LaneThe result of a transaction returns the :db-after which has processed your transaction.#2022-03-0914:48emccuefrom an api standpoint, yes that makes sense#2022-03-0914:48emccuebut when you actually go to perform a query on db-after, how do you know that the node you talk to will have the data?#2022-03-0914:49Joe Lanehttps://blog.datomic.com/2013/06/sync.html#2022-03-0914:50Joe LaneThere is a separate api function d/sync for cross-node operations#2022-03-0914:50emccuereading now#2022-03-0914:50emccue> Thus, contrary to common presumption
but yeesh#2022-03-0914:53emccueI think I need to invent some words to properly convey why i hate this writing style#2022-03-1217:09krukow👋 I’m new to logic programming and am looking to solve a constraint problem that I think is best modeled as a finite domain:
• Given a set of mentors and mentees
• Each mentee can have a list of mentor preferences
• I want to find all solutions to matching mentees with their mentor preferences.
• Each mentor can only be a mentor for one mentee
I first https://clojurians.slack.com/archives/C0566T2QY/p1647097346744699 with https://gist.github.com/krukow/9ab5a708458e4d3d758032b081a5707a: My thinking was to look at this as a finite domain (translating names to numbers) and using an lvar for each mentor.
By using (fd/distinct (vec lmentors)) I was hoping to solve for “each mentor only gets to have a single mentee”.
Problem: As the number of lvars becomes large (say ~40), running (fd/distinct (vec lmentors)) in core.logic becomes very slow. If I remove this constraint the program finds solutions immediately.
Question: I’ve tried to model this in datomic too (local dev). However, I find myself unable to write a query that (a) finds all solutions (a seq of “valid mentee-mentor matchings”); and (b) modeling the solution constraint that each mentor may only appear once in the entire solution set (i.e. something similar to fd/distinct). Is it possible to ask datomic to solve this constraint problem or should I be looking to something else?
Data setup: https://gist.github.com/krukow/070c51827fd3da25aa24a01ee7045011#2022-03-1220:35Drew Verlee@U606MT4CX in a typical sql database we call that a "one to many relationship"
You validate it at write time when you don't break that invariant in datomic, in sql you would have to first create the table that broke that constraint, then write it. So it would take two steps not one.#2022-03-1220:36Drew VerleeOr i'm confused. I didn't understand a good bit of your question.#2022-03-1309:17krukowThanks for the reply. I was hoping to use datomic to solve the constraint problem for me - but I don’t think it’s the right tool for the job. I think I need more general constraint solving. I found https://choco-solver.org/docs/ and it seems to do the trick 🙂#2022-03-1620:55m0smithOur security team is concerned about "dirty pipe" exploit. Do we have any guidance on which remediation to apply?#2022-03-1620:56ghadiDo you allow shell access to the Datomic nodes?#2022-03-1621:00m0smithWe don't (not that I know of) We are worried about the bastions in particular#2022-03-1621:02Joe LaneHi @m0smith, if the bastions are your primary concern you can upgrade your Datomic Cloud system to the latest release which doesn't have bastions anymore.#2022-03-1621:03m0smithWhat about the other EC2?#2022-03-1621:03m0smithThis is the CVE we are looking at https://securelist.com/cve-2022-0847-aka-dirty-pipe-vulnerability-in-linux-kernel/106088/#2022-03-1621:04Joe Lane@m0smith Why don't you open up a support ticket with us and forward your security team's concerns to us.#2022-03-1621:05m0smithWill do.#2022-03-2117:38Shuky BadeerHeyy guys! general question, suppose we have a movie "row" in datomic, we change it's name at some point, then we query that movie to get its details, does datomic return the latest version of it or ALL versions? Can we tell datalog whether to send all versions or not? Thanks a lot!#2022-03-2119:24camdezMost commonly you’ll be working with the latest version of an entity. If you specify an older state of the database (often done via a timestamp), then you can see the entity as it was. 
Alternately you can query the history of changes to attributes and assemble your own notion of the versions.#2022-03-2209:23Shuky Badeer@U0CV48L87 thank you! So, by default, a query in datomic returns the latest version of the datom? Right now i seem to be getting all versions of it so i wonder if i'm doing something wrong in adding datoms or whether i should adjust the query code#2022-03-2214:20jarrodctaylorIf you want to provide a link to example code demonstrating your efforts we can help you understand the responses you are seeing.#2022-03-2215:37jaretYeah, I think it's easiest to look at an example @U033V0AJFU4 so feel free to share. But in Datomic you pass a DB value to the query. You will get results in query from the DB value you are passing; in Datomic a https://docs.datomic.com/cloud/whatis/data-model.html#database is a set of datoms, a point-in-time immutable value that will never change. If you want to see all history we have the history API: https://docs.datomic.com/client-api/datomic.client.api.html#var-history (Client) and https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/history (Peer); see also: https://docs.datomic.com/cloud/tutorial/history.html#history-query. This is true for both on-prem and Cloud.#2022-03-2214:13timoHi, if Object Cache is not set, does this mean a peer might OOM when under load?#2022-03-2214:25thumbnailafaik Object Cache is automatically set to half of max heap size
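A minimal sketch of the three views camdez and jaret describe (Peer API assumed; `conn`, `movie-eid`, and `:movie/name` are hypothetical names):

```clojure
(require '[datomic.api :as d])

;; Latest view: a plain db value sees only current assertions.
(def db (d/db conn))
(d/q '[:find ?name .
       :in $ ?e
       :where [?e :movie/name ?name]]
     db movie-eid)

;; Point-in-time view: the same query against an as-of db.
(d/q '[:find ?name .
       :in $ ?e
       :where [?e :movie/name ?name]]
     (d/as-of db #inst "2020-01-01") movie-eid)

;; All versions: a history db exposes assertions and retractions;
;; ?op is true for adds, false for retracts.
(d/q '[:find ?name ?tx ?op
       :in $ ?e
       :where [?e :movie/name ?name ?tx ?op]]
     (d/history db) movie-eid)
```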
#2022-03-2313:56Björn EbbinghausCan anyone explain why there are differences in the :find specifications for on-prem and cloud?
Why doesn't cloud have find-coll [?e ...] or find-scalar ?e . ?
Is there something wrong with them? Are they discouraged even for on-prem? Is there a technical explanation?#2022-03-2314:22favilaThe only one I can think of is maybe too much confusion from people thinking the find-destructure either short-circuits or makes only-one guarantees (it doesn’t do either)#2022-03-2315:02dvingoI just submitted a support request. The Zendesk email replied with:
> In order to expedite the resolution of this support request, please provide the information described in this Knowledge Base article: https://cognitect.zendesk.com/entries/96723066-Information-to-provide-with-a-support-request.
That link 404s
#2022-03-2315:46jaretI'll fix that. Sorry!#2022-03-2317:01dvingocheers!#2022-03-2315:51zendevil.ethI have this query:
(defn sent
  "get user sent threads"
  [user & [{:keys [page]}]]
  (let [page (or page 0)
        db-conn (db/_d)
        to-drop (* messages-per-page page)]
    (->> (d/q '[:find ?e (max ?createdAt)
                :in $ ?user
                :where
                [?e :thread/users ?user]
                [?m :message/thread ?e]
                [?m :ent/created_at ?createdAt]
                (not [?user :user/deleted_thread ?e])
                [?user :user/read ?e]]
              db-conn
              (:db/id user))
         (sort-by #(inst-ms (last %)))
         reverse
         (drop to-drop)
         (take messages-per-page)
         (map #(db/by-id (first %) db-conn)))))
And it works:
(take 2 (map keys (sent {:db/id 17592186045460} {:page 0})))
;; => ((:ent/created_at :thread/users :thread/subject :user/read) (:ent/created_at :thread/users :thread/subject))
But I also want to get all the :message/thread matches, which will be an array, so I add an ?m in :find
(defn sent
"get user sent threads"
[user & [{:keys [page]}]]
(let [page (or page 0)
db-conn (db/_d)
to-drop (* messages-per-page page)]
(->> (d/q
'[:find ?e ?m (max ?createdAt)
:in $ ?user
:where
[?e :thread/users ?user]
[?m :message/thread ?e]
[?m :ent/created_at ?createdAt]
(not [?user :user/deleted_thread ?e])
[?user :user/read ?e]] db-conn
(:db/id user))
(sort-by #(inst-ms (last %)))
reverse
(drop to-drop)
(take messages-per-page)
(map #(db/by-id (first %) db-conn)))))
But evaling this still doesn’t show the :message/thread key:
(take 2 (map keys (sent {:db/id 17592186045460} {:page 0})))
;; => ((:ent/created_at :thread/users :thread/subject :user/read) (:ent/created_at :thread/users :thread/subject))
Why is it not shown?#2022-03-2315:57Björn EbbinghausIn sent you have: :find ?e (max ?createdAt) ?m
And then: (sort-by #(inst-ms (last %)))
"Last" is an entity id (long)#2022-03-2315:57zendevil.eth@U4VT24ZM3 I updated the question#2022-03-2316:06Björn EbbinghausOK...
So when you run a query. You get back a set of all matches. There are no "arrays" or "collections" in the query...
Instead of:
[entity-id [message1 message2 message3] created-at]
You get:
[entity-id message1 created-at]
[entity-id message2 created-at]
[entity-id message3 created-at]
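Plain Clojure can rebuild the "array" shape from those flat rows after the query runs, e.g. with group-by (a sketch over made-up tuples, not the thread's real data):

```clojure
;; Flat query rows shaped like [entity-id message created-at]:
(def rows [[1 :msg-a #inst "2022-03-01"]
           [1 :msg-b #inst "2022-03-01"]
           [2 :msg-c #inst "2022-03-02"]])

;; Collapse them into one map per entity, with the messages collected.
(for [[e tuples] (group-by first rows)]
  {:entity     e
   :messages   (mapv second tuples)
   :created-at (last (first tuples))})
```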
#2022-03-2316:09Björn EbbinghausYou can use a pull in your find and pull the messages for a thread:
:find ?e (pull ?e [:message/_thread]) (max ?createdAt)
That would give you something like:
[entity-id {:message/_thread [message1 message2 message3]} created-at]#2022-03-2316:12Björn EbbinghausBut I think it would be better to separate that...
1. Query for Threads
2. sort, filter, whatever them
3. Pull additional stuff like the specific messages later.#2022-03-2316:13zendevil.ethI still get the inst-ms error with:
(defn sent
"get user sent threads"
[user & [{:keys [page]}]]
(let [page (or page 0)
db-conn (db/_d)
to-drop (* messages-per-page page)]
(->> (d/q
'[:find ?e (pull ?e [:message/_thread]) (max ?createdAt)
:in $ ?user
:where
[?e :thread/users ?user]
[?m :message/thread ?e]
[?m :ent/created_at ?createdAt]
(not [?user :user/deleted_thread ?e])
[?user :user/read ?e]] db-conn
(:db/id user))
(sort-by #(inst-ms (last %)))
reverse
(drop to-drop)
(take messages-per-page)
#_(map #(db/by-id (first %) db-conn)))))#2022-03-2316:19Björn EbbinghausWhat is: "the inst-ms error" ?
Maybe look into what d/q actually returns?#2022-03-2316:47zendevil.ethI don’t understand this, so this:
(defn sent
"get user sent threads"
[user & [{:keys [page]}]]
(let [page (or page 0)
db-conn (db/_d)
to-drop (* messages-per-page page)]
(->> (d/q
'[:find ?e (pull ?e [:message/_thread]) (max ?createdAt)
:in $ ?user
:where
[?e :thread/users ?user]
[?m :message/thread ?e]
[?m :ent/created_at ?createdAt]
(not [?user :user/deleted_thread ?e])
[?user :user/read ?e]] db-conn
(:db/id user))))))
returns:
[[#:message{:_thread [#:db{:id 17592186048681} #:db{:id 17592186048686}]} 17592186048679] ...]
But when I remove the pull:
(defn sent
"get user sent threads"
[user & [{:keys [page]}]]
(let [page (or page 0)
db-conn (db/_d)
to-drop (* messages-per-page page)]
(->> (d/q
'[:find ?e (max ?createdAt)
:in $ ?user
:where
[?e :thread/users ?user]
[?m :message/thread ?e]
[?m :ent/created_at ?createdAt]
(not [?user :user/deleted_thread ?e])
[?user :user/read ?e]] db-conn
(:db/id user))))))
it returns:
[[17592186048679 #inst "2022-03-17T14:18:58.112-00:00"] ...]
Why is it not returning
[[17592186048679 #:message{:_thread [#:db{:id 17592186048681} #:db{:id 17592186048686}]} #inst "2022-03-17T14:18:58.112-00:00"] ...]
as expected in the first case?
Is this a bug in datomic?#2022-03-2316:55Björn EbbinghausAre you sure you executed the right code? Your first example looks of...
Try this:
(defn sent
  "get user sent threads"
  [user & [{:keys [page]}]]
  (let [page (or page 0)
        db-conn (db/_d)
        to-drop (* messages-per-page page)]
    (->> (d/q '[:find ?e (max ?createdAt)
                :in $ ?user
                :where
                [?e :thread/users ?user]
                (not [?user :user/deleted_thread ?e])
                [?user :user/read ?e]
                [?m :message/thread ?e]
                [?m :ent/created_at ?createdAt]]
              db-conn
              (:db/id user))
         (sort-by second)
         (map first)
         (drop to-drop)
         (take messages-per-page)
         (d/pull-many db-conn ['* {:message/_thread ['*]}]))))#2022-03-2316:56Björn EbbinghausLike I said: Try to separate finding the threads from pulling the messages.#2022-04-0718:14jacob.maine@U01F1TM2FD5 IIRC I ran into this long ago. The explanation back then was that there’s a limitation (bug) in Datomic that you can’t reference an entity more than once in a :find expression, or at least in certain situations. The recommendation back then was to convert
:find ?e (pull ?e [:message/_thread]) (max ?createdAt)
into
:find (pull ?e [:db/id :message/_thread]) (max ?createdAt)
That will get you results like this:
[[{:db/id 17592186048679 :message/_thread [#:db{:id 17592186048681} #:db{:id 17592186048686}]} #inst "2022-03-17T14:18:58.112-00:00"] ...]
And you’ll be able to get the entity id out of the first element.#2022-03-2412:25manutter51I’m trying to take a backup of a remote (QA) datomic on-prem instance, which I’ve done many times before, but now I’m getting a stack dump headed by java.sql.SQLException: No suitable driver. I’m using the same database URI as the QA web server, and I do not get this error when backing up a local datomic instance. The QA web server is on a different host from the QA datomic instance. It seems like I can verify that each of the individual pieces are correct: my local datomic install works, the URI works, the connection works over the network, etc. Any idea where I can start looking to debug this? I’ve done some googling, but nothing helpful has turned up so far.#2022-03-2412:51favilaYou’re sure that the jdbc driver you need is on the classpath of the backup process?#2022-03-2412:54manutter51Yeah, because it works when I back up my local datomic instance#2022-03-2412:54manutter51At least I think I’m sure, based on that.#2022-03-2412:55favilawhat is “local datomic” here, a sql database of the same kind as the remote one?#2022-03-2412:55favilae.g. both are postgres, or both are mysql, etc#2022-03-2412:56manutter51Hmmm, ok, that’s a good point. Now that I think about it, I believe my local dev box is just using h2 instead of MSSQL. So now I’m no longer sure I do have the right driver on the path. It always used to work correctly though, and I certainly haven’t deleted any drivers from the path.#2022-03-2412:57manutter51Let me dig into this some more, thanks for poking at my assumptions.#2022-03-2412:57favilathe datomic distro does not include mssql out of the box
#2022-03-2412:58favilamaybe you previously used a patched distro (copied the driver jar into libs or something), but upgraded versions and forgot this step?#2022-03-2412:58favilawhatever was done to the datomic distro that is running the transactor was probably not done to the directory running the backup#2022-03-2412:58manutter51Yeah, or some recent update messed with my predefined paths or something. Time for the sleuthing hat.#2022-03-2413:04manutter51Ok, that was it. Copied my SQLServerDriver-6.0.jar from the .m2 directory into datomic/lib, and it’s working now. I was getting stuck because I forgot my local backing store was H2 instead of MSSQL, so I was assuming all my drivers were in place.#2022-03-2413:05manutter51Thanks for setting me straight.#2022-03-2416:32zendevil.ethIs it possible to make two (transact …) atomic, i.e., either both transact or both fail if one fails?#2022-03-2416:33Adam Lewisall atomicity must exist within the boundary of a single transaction#2022-03-2416:34Adam Lewisbut if you serialize two transactions, you can make the second depend on the first. but no way to, e.g. roll-back the first if the second fails#2022-03-2416:35Adam Lewisthere are some techniques you can use to manage this, however...trying to find some references#2022-03-2416:36zendevil.ethBasically I want to make these two atomic:
(transact (assoc {:thread/users
[(:db/id current-user)
to]
:thread/subject subject}
:ent/created_at now
:db/id temp-id))
(transact {:db/id current-user
:user/read true})
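Combining both maps into the tx-data of a single transact call, as the replies in this thread suggest, is the only way to get atomicity. A sketch using the names from the snippet (current-user, to, subject, now, and temp-id are assumed to be bound):

```clojure
;; One transaction: both maps succeed or fail together.
@(d/transact conn
   [(assoc {:thread/users   [(:db/id current-user) to]
            :thread/subject subject}
           :ent/created_at now
           :db/id temp-id)
    {:db/id     (:db/id current-user)
     :user/read true}])
```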
#2022-03-2416:37jacekschaeYou could put two maps {} in one transact ex. https://github.com/jacekschae/learn-datomic-course-files/blob/f2378c84bade5cb64018f72aa9179a8c8bb25df4/increments/complete/src/main/cheffy/conversation/db.clj#L44#2022-03-2419:37zendevil.ethI have the following:
@(d/transact-async
@conn
[(assoc params
:ent/created_at now
:db/id temp-id)
(map (fn [to]
[:db/retract to :user/read (:message/thread params)])
(:message/to params))])
but I’m getting:
{:error "java.lang.IllegalArgumentException: :db.error/invalid-lookup-ref Invalid list form: [:db/retract 17592186045444 :user/read 17592186045450]"}
Why is that the case?#2022-03-2419:38kenny@(d/transact-async
@conn
(into [(assoc params
:ent/created_at now
:db/id temp-id)]
(map (fn [to]
[:db/retract to :user/read (:message/thread params)])
(:message/to params))))#2022-03-2522:42Björn Ebbinghaus@ps
Did you know that you probably don't need the
Datomic already stores the time for each transaction, and you can query for the transaction where a fact was added.
The following query finds the last time the
[:find ?message ?created-at
 :in $ ?message
 :where
 [?message :message/id _ ?tx]
 [?tx :db/txInstant ?created-at]]
#2022-03-2522:55kennyGenerally, I would not recommend using transaction time to model domain information.#2022-03-2523:03Björn EbbinghausYou are absolutely right. 🙂
Honestly, I am a bit confused why I wrote that message. I don't even do it myself...
Time to go to bed, I guess.#2022-03-2419:41ghadi@ps first step of debugging is to take things apart#2022-03-2419:41ghadipull out the tx-data expression out of the call#2022-03-2419:41ghadiand look at it#2022-03-2419:42ghadikenny is distributing fish, but better to teach a person to fish
#2022-03-2523:21jasonjcknMaybe an easy question, new to datomic,
Let’s say i’m implementing an idempotent API on top of datomic
PUT /component/{name}
In the case where there already exists a component by the name name , we update the record, e.g.
(d/transact! conn [{:db/id [:c/name name], ...}])
In the case where it doesn’t exist we create a new record, with new UUID, e.g.
(d/transact! conn [{:c/myid (d/squuid), :c/name name, ...}])
So how do I write the logic that determines whether to (1) create a new record, or (2) update an existing one as above? If I put an if/else branch that checks ahead of time whether the record already exists, that’s using a stale snapshot of the database, so by the time a branch is selected, I may be on the wrong branch.#2022-03-2602:18Drew VerleeJust insert the update/record/datom/fact and provide the unique identifier. If it exists, it will be updated, if it doesn't it will add that and so now it's created.
Or at least that's how it looks from my couch without consulting the docs.#2022-03-2603:08jasonjcknif it already exists i'll get an illegal state because d/squuid will generate a different value than what's already in the database #2022-03-2608:58jacekschaeYou could take a look at https://docs.datomic.com/cloud/transactions/transaction-functions.html maybe that will help?
#2022-03-2609:27jacekschaeOne more thing. If the name is unique then you should also model this in the schema by using :db/unique :db.unique/identity; then when you have that it will either create a new record or update an existing one. In this case you won't have to use transaction functions, which come with their own tradeoffs.
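A sketch of the :db.unique/identity modelling jacekschae describes, using the attribute names from this thread (the :c/description attribute is made up for illustration):

```clojure
;; Schema: :c/name upserts. Transacting an existing name resolves the
;; tempid to the existing entity instead of creating a new one.
(def schema
  [{:db/ident       :c/name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}
   {:db/ident       :c/myid
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one}])

;; The same tx-data then works for create and update alike, as long as
;; :c/myid is left out of the update path (jasonjckn's sticking point).
(d/transact conn [{:c/name component-name, :c/description desc}])
```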
#2022-03-2616:12jasonjckn@U8A5NMMGD Sorry I still don’t see it, the :db.unique/identity is exactly why you would reach an illegal state exception, because you’d be trying to transact a newly generated (d/squuid) with an already existing :c/name and existing :c/myid#2022-03-2616:13jasonjcknthe ‘if else branch’ is because when create a record you generate a new UUID under :c/myid, when you update an existing record, you omit that as part of the transact! because it’s already generated. but now we’re back to a race condition.#2022-03-2616:16jasonjcknin SQL , the equivalent would be an INSERT with an ON CONFLICT DO UPDATE SET …#2022-03-2616:16jasonjcknor even just grouping a series of statements in a transaction.#2022-03-2617:44jasonjcknseems like transaction functions would solve this , thanks #2022-03-2618:13Drew VerleeHere is the docs
> If a transaction specifies a unique identity for a temporary id, and that unique identity already exists in the database, then that temporary id will resolve to the existing entity in the system. This upsert behavior makes it possible for transactions to work with domain identities, without ever having to specify Datomic entity ids.#2022-03-2618:18Drew VerleeIt should resolve and then it says "upsert behavior" which feels like what you want.#2022-03-2618:22jacekschae@U0J3J79FE seems like I didn't understand what you were trying to do clearly.
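A sketch of the transaction-function approach jasonjckn settled on: the create-vs-update decision runs against the db value the transaction is applied to, so the check and the write cannot race (Peer-style classpath function; all names are hypothetical):

```clojure
;; Runs in the transactor: db is the basis the tx will be applied to,
;; so checking for an existing :c/name here is race-free.
(defn put-component [db component-name attrs]
  (if-let [eid (ffirst (d/q '[:find ?e
                              :in $ ?n
                              :where [?e :c/name ?n]]
                            db component-name))]
    ;; Update: reuse the existing entity, never touch :c/myid.
    [(assoc attrs :db/id eid)]
    ;; Create: mint a fresh squuid exactly once.
    [(assoc attrs :c/name component-name
                  :c/myid (d/squuid))]))
```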
#2022-03-2721:24steveb8nQ: I want to query “last-updated” date-time for n entities. it’s not obvious how to do this because a change to any attribute in the entity is an “update”. has anyone found a way to do this?#2022-03-2722:19kennyExplicitly model it. Don't use Datomic’s history for this. #2022-03-2806:43steveb8nI have done that for parent entities when children are updated. why model this instant value when it is in history?#2022-03-2813:23kennyBecause you have control over it. #2022-03-2822:26steveb8nfair enough. having thought about it a bit, I’m going to follow your advice. thx for the nudge#2022-03-2911:30Ivar RefsdalPS. Don't forget that two (Date.) may be equal, i.e. refer to the same millisecond.
Thus if you are asserting:
:e/updated (Date.) twice fast enough on the same entity, you may lose data.
If you are not losing data now, you may in the future as latency decreases.
I'm not sure it's a very good idea to explicitly model this.
To me it seems this should be something that Datomic "can do".
I have solved this (for certain types of data) in https://github.com/ivarref/rewriting-history. You can have a look at the function pull-flat-history.
This allows you to get the (relative) latest update of some entity.#2022-03-2911:31Ivar RefsdalWarning. I'm pretty sure pull-flat-history is slow.#2022-03-2815:39SimonAnybody in copenhagen working with #datomic ?#2022-03-2914:32Daniel JompheNow that Clojure 1.11 is out, our deployment script for Datomic Cloud Ions is broken.
What broke it is that our CI-CD deployment script installed Clojure 1.11 instead of 1.10 to run its jobs.
I'll simply find a way to pin it to 1.10, since anyways we can't use Clojure 1.11 for now in our Ion-based project.
I wanted to let you know. Here's the interesting bits (in the thread)...#2022-03-2914:34Daniel Jompheclojure -M:only/bin -m <elided>
Downloading: org/clojure/clojure/1.11.0/clojure-1.11.0.pom from central
Downloading: org/clojure/core.specs.alpha/0.2.62/core.specs.alpha-0.2.62.pom from central
Downloading: org/clojure/spec.alpha/0.3.218/spec.alpha-0.3.218.pom from central
Downloading: org/clojure/spec.alpha/0.3.218/spec.alpha-0.3.218.jar from central
Downloading: org/clojure/clojure/1.11.0/clojure-1.11.0.jar from central
Downloading: org/clojure/core.specs.alpha/0.2.62/core.specs.alpha-0.2.62.jar from central
⚙ deployment script ready to be used.
🎯 OPERATING :push - - - - - - - - - -
clojure -M:ion-dev {:op :push}
⏳
✍️ REQUEST OUT - - - - - - - - - - - -
{:command-failed "{:op :push}",
:causes
({:message
"Attempting to call unbound fn: #'cognitect.s3-libs.file/abs",
:class IllegalStateException})}
✍️ REQUEST ERR - - - - - - - - - - - -
"WARNING: abs already refers to: #'clojure.core/abs in namespace: cognitect.s3-libs.file, being replaced by: #'cognitect.s3-libs.file/abs"#2022-03-2914:35Daniel JompheNot sure yet why this downloaded 1.11.
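The thread below traces this to :replace-deps dropping the project's Clojure pin, which lets the CLI resolve the newest Clojure. Pinning Clojure explicitly inside the tool alias avoids that; a deps.edn sketch with the versions mentioned in the thread:

```clojure
;; deps.edn: keep the ion tooling alias on Clojure 1.10.3 explicitly,
;; since :replace-deps discards the project-level Clojure dependency.
{:aliases
 {:ion-dev {:replace-deps {com.datomic/ion-dev {:mvn/version "1.0.298"}
                           org.clojure/clojure {:mvn/version "1.10.3"}}
            :main-opts ["-m" "datomic.ion.dev"]}}}
```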
In deps.edn, our dep on Clojure is:
org.clojure/clojure {:mvn/version "1.10.3"}#2022-03-2914:40Daniel JompheOk, that's because the aliases behind -M:only/bin and -M:ion-dev above are defined as such in our project's deps.edn.
In effect, it removes the dependency on clojure 1.10.3, so when the Clojure CLI runs, it must decide to download the latest clojure because I "forgot" to specify the dependency on clojure...
:only/bin {:replace-paths ["bin"]
:replace-deps {}}}
:ion-dev {:replace-deps {com.datomic/ion-dev {:mvn/version "1.0.298"}}
:main-opts ["-m" "datomic.ion.dev"]}#2022-03-3013:24xcenoI was about to upgrade to 1.11 tomorrow.
Are you saying ions simply don't work with 1.11, or does this just affect your specific deployment?#2022-03-3013:25Daniel JompheThere's two considerations:
1. What clojure version you use to operate your ion deployments.
2. What clojure version you use in your ion clojure processes.#2022-03-3013:26Daniel JompheMy post was about 1.
As for 2., even if your project's deps.edn would specify 1.11 as a dep for your ion clojure processes, your clojure code will get loaded in the Datomic Cloud Ion process already running with 1.10.3. During the deployment you'll get a warning that your choice of 1.11 was overridden by what's available, 1.10.3.#2022-03-3013:27xcenoAhh I see, thanks for clarifying that!
#2022-03-3013:28Daniel JompheSo you need to keep your ion app with 1.10.3 until Cognitect announces an upgrade is available with a new cloudformation to upgrade your Datomic Cloud cluster (making sure they mention they moved to 1.11).#2022-03-3014:09jaret@U0514DPR7 Thanks for reporting this! We are working on a fix 🙂. Apologies for the frustration this might have caused.#2022-03-3014:10Daniel JompheNone at all, Jaret. Thanks for letting us know about the WIP for a fix. 🙂#2022-04-0414:22jaret@U0514DPR7 We released a fix for this today (upgrade to this latest version of ion-dev): https://forum.datomic.com/t/ion-dev-1-0-304/2063.
#2022-04-0414:22Daniel JompheThanks for the notice Jaret!#2022-03-3020:09geraldodevCan you provide the steps to import https://github.com/Datomic/mbrainz-sample into datomic-free, There is https://github.com/Datomic/mbrainz-importer . Can it be used ? Could you provide the changes ?#2022-03-3114:39jaretGerald, backup and restore are not included in Datomic free. And shoe horning is possible with importer, but I wouldn't want you to spend the time to do that unless you think it's absolutely necessary to use free to explore Mbrainz.
Why do you want to use free? To potentially provide clarity, Datomic Pro Starter has no cost. And is intended to allow devs to explore. It includes a perpetual license that is good for all releases of Datomic released before the expiration date. No purchase is required. You simply create an account with my.datomic at https://my.datomic.com/account/create#2022-03-3114:39jaretThen agree:#2022-03-3114:39jaret#2022-03-3114:40jaretand get a license key + access to download the datomic.zip.#2022-03-3114:41jaretDatomic Free is intended for distributed uses of Datomic and does not have all the features of Datomic Pro. and I'd recommend if your use case is exploration to just sign up for pro-starter and then you can use the instructions under the README in the https://github.com/Datomic/mbrainz-sample.#2022-03-3114:41jaretHappy to chat if you have concerns or a different use case and you can always hit up <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection> with questions/concerns.#2022-04-0112:08geraldodev@U1QJACBUM I wanted to use datomic-free to test https://github.com/wilkerlucio/pathom3-datomic . Thank you for your advice.#2022-03-3108:23dazldI occasionally see :valcache/item-too-large in logs - I’m wondering if this means I’ve misconfigured something. any ideas? I don’t see it being mentioned before on slack, either.#2022-03-3114:32jaretItem-too-large for Memcached and Valcache indicate the same thing. The error indicates that a segment was built by Datomic that was too big to store in memcached/valcache. Having these occasionally in your logs is not anything to be concerned about, however, if you create something like a super long string/blob on an attribute you can make a segment that is too large to fit in your cache. I know off the top of my head that the fixed size limit for memcached is 1MB.
#2022-04-0113:52Daniel JompheI'm wondering if Cognitect now uses OpenTelemetry to track each one of our Datomic Cloud deployments... I suppose and hope not (hmm, yeah that's safe to assume), because I've begun testing setting this OverrideSettings for the Datomic compute group CloudFormation parameter:
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<API_KEY>,x-honeycomb-dataset=<DATASET>"; export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=; export OTEL_RESOURCE_ATTRIBUTES="service.name=<NAME>"
Just checking in, in case someone has tips or warnings to share. 🙂#2022-04-0413:12jaretHi @U0514DPR7,
> I'm wondering if Cognitect now uses OpenTelemetry to track each one of our Datomic Cloud deployments...
No we do not track customers or perform any customer monitoring. In fact, Marketplace specifically requires sellers not to monitor their customers:#2022-04-0414:31Daniel JompheThanks Jaret. I realize what I wrote might seem sensitive.
My focus was more on the technical aspect and consequences of using OverrideSettings the way we did.#2022-04-0418:05Joe LaneHey @U0514DPR7!
Telemetry insights (via OTEL or vanilla HoneyComb) is an exciting addition to any project.
The use of OverrideSettings is only intended for support scenarios with guidance from Datomic Support and using Cloud like this is considered an "unsupported" configuration.
My recommendation would be to keep things in userspace by:
• Depending on the OTEL library of your choice explicitly in your deps.edn
• Store these values in SSM and fetch them using https://docs.datomic.com/cloud/ions/ions-reference.html#get-params
Having worked on a project with HoneyComb before, I'm very excited for you.#2022-04-0418:40Daniel JompheHi @U0CJ19XAM, thanks a lot for your response!
So there's 3 ways to configure OTel, and I think you suggest we use the 3rd one. Can you confirm?
1. https://opentelemetry.io/docs/instrumentation/java/automatic/ I think we'd like to use this mode. But I suppose Datomic Cloud will "never" allow it or offer an option to boot with it?
2. https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure: For this mode, we must depend explicitly on OTel's libs in our deps.edn. And upon loading, it needs to find some config envars. That's what we set up with for now. This seemed like the best next thing to try.
3. https://opentelemetry.io/docs/instrumentation/java/manual/: For this mode too, we must depend explicitly on OTel's libs in our deps.edn. And then we must expend the most energy manually instantiating and configuring every desired component, as soon as possible during the app startup.
We might see this as 3 levels of easiness and control.
I wonder if there are practical reasons why you seem to steer us towards #3 and block us from #1 and #2. Probably a bunch of reasons. After all, Datomic Cloud Ion is a highly opinionated PAAS. (I've also developed for Google App Engine, I'm used to have to go back to the drawing board to adapt to such environments.)
Also, I wonder why Datomic Cloud doesn't expose some control knobs for us to define envars. It might be a good practice in cloud architecture to not depend on those, and instead strive to use SSM in all possible cases... Still, any library that we might want to use that depends on the presence of a regular-style envar will require adaptation to such an environment.
Thinking about it, I'm not sure I should complain. 🙂
• On one hand, it forces us to slow down and spend much more time on this kind of things than we'd think we'd like to do.
• On the other hand, this all might be forcing good habits on us that (it seems) I'm not yet entirely equipped to foresee the value. And it might save us from loading many useless java packages...
Overall I believe we'll continue with solution #2 for now, and depending on your answers, we might invest later in migrating to #3 once we've sufficiently installed the observability we want to achieve quickly.#2022-04-0418:49Joe Lane@U0514DPR7 (untested advice) Have you investigated invoking (System/setProperty "otel.exporter.otlp.headers" "x-honeycomb-team=<API_KEY>") before "loading" (instantiating?) your autoconfigure option (#2)?#2022-04-0418:51Joe Lane^^ This should support #2 and #3#2022-04-0418:54Daniel Jomphe@U0CJ19XAM I feared I might set the property too late for the lib to pick it up, and expose us to config race conditions. But I realize if I loaded a clojure namespace first in our ion manifest namespace, I'd be able to guarantee the property is set before our first use of OTel. This might be a valid way to set it up. (Although 10 years ago when I was still doing Java, I remember not being satisfied by setProperty.)#2022-04-0306:22Shuky BadeerHi guys! A performance related question here..
This query takes 7-9 seconds to finish executing. It's doing a simple SQL-like join on a dataset that is 11,000 lines big. We're pulling the relevant attributes of each entity (plus the rest of attributes that were cut from the image). Is 7-9 seconds normal for a query like that?#2022-04-0307:25domparryHard to give an answer without knowing anything about the infrastructure. What is datomic running on? 11000 lines should almost be able to fit in memory.#2022-04-0309:26dazlddid you add an index to both the id and the belongs_to attributes?
I guess another thing to do is to build it back up from the simplest possible query to see which part is taking all the time.#2022-04-0309:36pithylessAlso, do you actually need the first join? You don't seem to be using ?data_id anywhere. If every ?sfd has a :strive_form_data/id attribute, you can simplify to just matching for:
[?sfdaa :strive_form_data_additional_answers/belongs_to ?sfd]
#2022-04-0309:37pithyless^ And if the above is true, you could simplify it to just fetch directly via the index:
(->> (d/datoms :avet :strive_form_data_additional_answers/belongs_to)
(map pull-many ,,,))
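A slightly fuller version of that index-scan sketch (Peer API assumed: d/datoms takes the db value as its first argument, and :avet requires the attribute to be indexed or unique):

```clojure
;; Walk the AVET index for the attribute, then pull each matching entity.
(->> (d/datoms db :avet :strive_form_data_additional_answers/belongs_to)
     (map :e)
     (d/pull-many db '[*]))
```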
#2022-04-0309:37pithylessAlso, you can consider if you need two pulls, since one is just a nested data of the other.#2022-04-0309:45pithylessRemember that Datomic does not have a query optimizer, so in general when queries run slow make sure the correct indices are in place AND the order of where clauses reduces the number of matches in the working set. (e.g. are there fewer :strive_form_data/id or :strive_form_data_additional_answers/belongs_to datoms? perhaps re-ordering the clauses would help? etc.)#2022-04-0309:48pithylessSpeaking of which, I have my suspicions of that double-pull: won't that create a separate entry for each [?sfd ?sfdaa]? So if sfd1 has 3 sfdaa's, you will see: [(pull sfd1) (pull sfdaa1)] [(pull sfd1) (pull sfdaa2)] [(pull sfd1) (pull sfdaa3)] ?#2022-04-0408:46BenjaminJo I have an issue when pushing an ion with a git dep
org.sg.reply-bot/reply-bot {:git/url "
Cloning:
it tries to git clone I recognize the error. It correctly recognizes that the dep needs to be updated (I bumped the sha)
#2022-04-0413:14Alex Miller (Clojure team)This sometimes happens due to concurrency issues during parallel download. What version of Clojure CLI are you using?#2022-04-0413:52BenjaminClojure CLI version 1.10.3.1075#2022-04-0413:57BenjaminI'll upgrade#2022-04-0414:03Alex Miller (Clojure team)I don't think there's any relevant changes since that version#2022-04-0414:03Alex Miller (Clojure team)this is a known issue#2022-04-0414:03Alex Miller (Clojure team)if you push again, does it work?#2022-04-0414:05Benjaminsec#2022-04-0414:06Benjaminseems to be consistent - tried 3 times#2022-04-0414:14Alex Miller (Clojure team)you might try cleaning all or some of ~/.gitlibs - you might have a dir there that is empty that's blocking a download. it's possible that was creating during an earlier version due to this issue too#2022-04-0414:16BenjaminI check#2022-04-0414:22Benjaminhehe looks promising but now I have this issue:
{:command-failed
"{:op :push, :creds-profile \"supportbot-test\", :region \"us-east-1\"}\n",
:causes
({:message "Invalid remote: origin", :class InvalidRemoteException}
{:message
"
not sure if I need to fix my git / ssh config. I'll try to pull manually#2022-04-0414:24Alex Miller (Clojure team)yeah, it shouldn't be doing anything different than what you get outside the CLI#2022-04-0414:25Alex Miller (Clojure team)you can dump more info by setting export GITLIBS_DEBUG=true - that will dump every git command being run#2022-04-0414:25Benjaminok sec#2022-04-0414:32BenjaminCloning:
for what it's worth. I'll try change to http because this is a public repo anyway#2022-04-0414:35Alex Miller (Clojure team)hmm, this looks like you are now encountering an issue due to much older version of tools.deps used by ion dev tooling#2022-04-0414:35Alex Miller (Clojure team)make sure you use https !#2022-04-0414:35Alex Miller (Clojure team)http not supported by either tools.deps or github anymore#2022-04-0414:36BenjaminI mean https yea#2022-04-0414:36Benjaminso the github one works but I have a gitlab one
org.eclipse.jgit.api.errors.TransportException:
😛#2022-04-0414:37BenjaminI can clone that one manually#2022-04-0414:38Benjaminhttps://clojure.atlassian.net/browse/TDEPS-104?oldIssueView=true I try this#2022-04-0414:39Benjaminok this unknown host key issue is fixed with the ssh-keyscan command#2022-04-0414:50Benjaminalright pushing works, thanks a lot#2022-04-0415:54Alex Miller (Clojure team):thumbsup:#2022-04-0414:21jaretHowdy all! We released a fix to the issue with pushing ions on Clojure 1.11 https://forum.datomic.com/t/ion-dev-1-0-304/2063
#2022-04-0414:33icemanmeltingGuys, i am kinda new here, but I have been testing/using datomic recently, and I have noticed something, whenever I start the console, to get the ui to be able to query stuff in browser, the transactor just quits on me after some time. Has this ever happened to you? What am I doing wrong? Version I am using currently is 1.0.6344#2022-04-0414:47TwanI’m getting connection errors from 1 (and only one) machine in a cluster of 3 machines without any (as far as we can tell) relevant firewall/connection differences with regard to how they’re setup. Does the error below ring a bell for anybody? 🧵#2022-04-0414:47Twanuser=> (first (datomic.api/q '[:find ?e :where [?e :entity/id _]] (datomic.api/db (datomic.api/connect "datomic:sql://<mydb>?jdbc:postgresql://<myhost>:5432/<mypsqldb>?user=<mypsqluser>&password=<mypsqlpass>&sslmode=require"))))
Execution error (NullPointerException) at datomic.kv-cluster/kv-cluster (kv_cluster.clj:355).
Cannot invoke "clojure.lang.IFn.invoke()" because the return value of "clojure.lang.IFn.invoke(Object)" is null
#2022-04-0414:47TwanOn other machines, the result is just [entity-id]#2022-04-0414:48TwanWe granted 5432 and 5334 (our transactor port)#2022-04-0414:49TwanOur assumption is that discovery fails, however, when entering the wrong postgres password on purpose, we get:
2022-04-04 14:49:02.215+0000 WARN [nREPL-session-1e30a172-3410-4769-b4d8-2a90469391ce] [datomic.kv-sql-ext:54] - {:event :sql/validation-query-failed, :query "select 1", :pid 1, :tid 78}
nrepl.middleware.interruptible-eval/evaluate/fn interruptible_eval.clj: 91
clojure.core/eval core.clj: 3202
...
user$eval77425.invoke NO_SOURCE_FILE: 1
user$eval77425.invokeStatic NO_SOURCE_FILE: 1
datomic.api/connect api.clj: 15
datomic.Peer.connect Peer.java: 106
...
datomic.peer/connect-uri peer.clj: 748
datomic.peer/get-connection peer.clj: 666
datomic.peer/get-connection/fn peer.clj: 669
datomic.connector/resolve-name connector.clj: 71
...
datomic.cache/lookup-cache/reify/valAt cache.clj: 280
datomic.cache/lookup-cache/reify/valAt cache.clj: 287
...
datomic.cache/fn/reify/valAt cache.clj: 342
datomic.coordination/cluster-conf->resolved-conf coordination.clj: 157
datomic.coordination/create-system-cluster coordination.clj: 89
...
datomic.coordination-ext/fn coordination_ext.clj: 82
clojure.core/swap! core.clj: 2356
...
datomic.coordination-ext/fn/fn coordination_ext.clj: 86
...
datomic.require/require-and-run require.clj: 17
datomic.require/require-and-run require.clj: 22
clojure.core/apply core.clj: 667
...
datomic.kv-sql-ext/kv-sql kv_sql_ext.clj: 91
datomic.kv-sql-ext/cluster-conf->spec kv_sql_ext.clj: 82
...
clojure.core/memoize/fn core.clj: 6342
clojure.core/apply core.clj: 667
...
datomic.kv-sql-ext/fn kv_sql_ext.clj: 76
datomic.kv-sql-ext/try-validation-query kv_sql_ext.clj: 47
datomic.sql/connect sql.clj: 16
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection DataSourceProxy.java: 125
org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool DataSourceProxy.java: 101
org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool DataSourceProxy.java: 114
org.apache.tomcat.jdbc.pool.ConnectionPool.<init> ConnectionPool.java: 135
org.apache.tomcat.jdbc.pool.ConnectionPool.init ConnectionPool.java: 479
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection ConnectionPool.java: 616
org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection ConnectionPool.java: 684
org.apache.tomcat.jdbc.pool.PooledConnection.connect PooledConnection.java: 175
org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver PooledConnection.java: 266
org.postgresql.Driver.connect Driver.java: 260
org.postgresql.Driver.makeConnection Driver.java: 458
org.postgresql.jdbc.PgConnection.<init> PgConnection.java: 217
org.postgresql.core.ConnectionFactory.openConnection ConnectionFactory.java: 49
org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl ConnectionFactoryImpl.java: 197
org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect ConnectionFactoryImpl.java: 146
org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication ConnectionFactoryImpl.java: 525
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "<mypsqluser>"#2022-04-0414:52TwanFrom the transactor’s POV, nothing is reported (no errors, no info)#2022-04-0717:55TwanWe ran both Datomic free and Datomic pro in this project. As a result, it would sometimes pick free and sometimes pro when resolving Datomic api queries, depending on loading order. On free they'd fail.#2022-04-0506:58Shuky BadeerHi guys, I wanna restrict the query based on "<" predicate.
Submitted at is a date in epoch format so it's an integer. I know that i can use ?nps based on transaction time but in this specific case i can't use it because this data was transferred from a different db.
How do i restrict based on an attribute?#2022-04-0507:18favila[??? :strive_form_data/submitted_at ?some-value][(< ?some-value 1)] will work. But only you know what the ??? should be.
#2022-04-0507:21Shuky Badeer@U09R86PA4 amazing thank you so much! Can i ask, if i wanted some-value to be between 1 and 100 for example, how would i go about doing that? Using clojure syntax inside the query keeps leading to an error#2022-04-0507:22favila[(<= 1 ?some-value)][(<= ?some-value 100)]#2022-04-0507:23favila<= < > >= = != are special in queries. They’re not the normal clojure comparators#2022-04-0507:24favilahttps://docs.datomic.com/on-prem/query/query.html#built-in-expressions#2022-04-0507:24Shuky BadeerCool that's how i did it. But i was afraid it would be inefficient since this basically does an implicit join?#2022-04-0507:25favilaall binding does an implicit join#2022-04-0507:26favilahow do you get the ?some-value we’ve been talking about in your query?#2022-04-0507:26favilais it just [?nps :strive_form_data/submitted_at ?some-value]?#2022-04-0507:26favilaor is it on some other entity?#2022-04-0507:29Shuky BadeerOhh ok. Yes exactly how you just said#2022-04-0507:30favilaso the entity id is already known and bound; this is just retrieving the value and filtering#2022-04-0507:31favilai.e. applying two predicates#2022-04-0507:32favilathis query is already scanning all answers/belongs_to#2022-04-0507:33favilaare either the second or third clause indexed?#2022-04-0507:33favilahttps://docs.datomic.com/on-prem/best-practices.html#most-selective-clauses-first#2022-04-0512:13Shuky Badeer@U09R86PA4 thank you very much! I wanted to ask, is there a way to do computation at the datomic query level? For example in sql we can use functions that divide numbers and uses the result as input of a child query. Is that possible in datalog as well?#2022-04-0512:51favilaQuery rules are one way. You can also (on on-prem) call any function{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-04-0512:54favilaRoughly.
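Put together, favila's between-range advice might look like this as a full query (a sketch; the attribute name is taken from the thread and `db` is assumed to be bound to a database value):

```clojure
;; Entities whose :strive_form_data/submitted_at lies in [1, 100].
;; <= here is Datomic's built-in query expression, not clojure.core/<=.
(d/q '[:find ?e
       :where
       [?e :strive_form_data/submitted_at ?some-value]
       [(<= 1 ?some-value)]
       [(<= ?some-value 100)]]
     db)
```

Per the best-practices link above, putting the most selective clause first keeps the working set small.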
#2022-04-0514:47Daniel JompheWith Datomic Cloud, does it ever happen to you that you can't deploy anymore because of no more space left on the disk attached to the instance? Automatic rollbacks also then fail for the same reason. Did we do something we shouldn't??
Cannot allocate memory#2022-04-0514:50Daniel Jomphe#2022-04-0515:09Daniel JompheObviously, those zip files aren't the cause - they're less than 1 MB.
And the dependencies in .m2 and gitlibs that must be copied from the S3 bucket don't weigh more than 150 MB.#2022-04-0516:42icemanmeltingOk, so regarding the problem I mentioned yesterday, I have more input, I get this on the transactor side:
o.a.activemq.artemis.core.client - AMQ212037: Connection failure to /172.20.0.4:57298 has been detected: AMQ229014: Did not receive data from /172.20.0.4:57298 within the 10,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2022-04-05 16:29:14.005 WARN default o.a.activemq.artemis.core.server - AMQ222061: Client connection failed, clearing up resources for session 9d3f8e8c-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.014 INFO default datomic.update - {:task :reader, :event :update/loop, :msec 2110000.0, :phase :end, :pid 1, :tid 34}
2022-04-05 16:29:14.015 WARN default o.a.activemq.artemis.core.server - AMQ222107: Cleared up resources for session 9d3f8e8c-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.198 WARN default o.a.activemq.artemis.core.server - AMQ222061: Client connection failed, clearing up resources for session 9d435f1d-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.199 WARN default o.a.activemq.artemis.core.server - AMQ222107: Cleared up resources for session 9d435f1d-b4f8-11ec-bb6d-0242ac140004
2022-04-05 16:29:14.560 INFO default o.a.activemq.artemis.core.server - AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.17.0 [9a4ab8f7-b4f8-11ec-85e0-0242ac140003] stopped, uptime 35 minutes
And this on the peer side:
clojure.lang.ExceptionInfo: Error communicating with HOST 0.0.0.0 or ALT_HOST 172.20.0.3 on PORT 4334
at datomic.connector$endpoint_error.invokeStatic(connector.clj:53)
at datomic.connector$endpoint_error.invoke(connector.clj:50)
at datomic.connector$create_hornet_factory.invokeStatic(connector.clj:134)
at datomic.connector$create_hornet_factory.invoke(connector.clj:118)
at datomic.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:308)
at datomic.connector$create_transactor_hornet_connector.invoke(connector.clj:303)
at datomic.connector$create_transactor_hornet_connector.invokeStatic(connector.clj:306)
at datomic.connector$create_transactor_hornet_connector.invoke(connector.clj:303)
at datomic.peer.Connection$fn__12046.invoke(peer.clj:217)
at datomic.peer.Connection.create_connection_state(peer.clj:205)
at datomic.peer$create_connection$reconnect_fn__12124.invoke(peer.clj:469)
at clojure.core$partial$fn__5857.invoke(core.clj:2627)
at datomic.common$retry_fn$fn__827.invoke(common.clj:543)
at datomic.common$retry_fn.invokeStatic(common.clj:543)
at datomic.common$retry_fn.doInvoke(common.clj:526)
at clojure.lang.RestFn.invoke(RestFn.java:713)
at datomic.peer$create_connection$fn__12126.invoke(peer.clj:473)
at datomic.reconnector2.Reconnector$fn__11300.invoke(reconnector2.clj:57)
at clojure.core$binding_conveyor_fn$fn__5772.invoke(core.clj:2034)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
The system works fine for over half an hour, but then basically just dies with those 2 errors in the logs. On the transactor console output I just get Heartbeat failed. Any ideas?#2022-04-0517:05favila172.20.0.4 != 172.20.0.3
#2022-04-0517:06icemanmeltingyes, but one is the peer, and the other is the transactor#2022-04-0517:06icemanmeltingdifferent machines#2022-04-0517:07icemanmeltingit works for like 30 mins or so, and then dies. I am using prod level jvm args, 4gb for heap#2022-04-0517:24favilaIs the system quiet in that time? Eg no transactions?#2022-04-0517:26favilaOn google cloud I remember some issue where their networking stack just drops idle tcp connections. I had to add keepalives into the kernel options somehow. IIRC it manifested like this, the peer looked like it went away and it only happened when the system was quiet#2022-04-0517:27favilaThis was more than 5 years ago, my memory is hazy#2022-04-0517:35icemanmeltingno, i am developing a data processing framework with clojure, and currently i am stress testing it, and I am writting the results to datomic#2022-04-0517:35icemanmeltingso every second it has 2 tx of 100 datoms to save#2022-04-0517:35icemanmeltingso basically 200 a second, splitted between 2 transactions#2022-04-0517:36icemanmeltingI should also mention that these are dockerized, and I have also modified the keep alive values, for both containers, to make sure this isn’t a tcp connection issue#2022-04-0517:39favilais it possible that the peer really did just go away for 10 secs, e.g. a long gc pause?#2022-04-0517:39icemanmeltingwell, on the transactor i have set the max gc pause to be 50ms#2022-04-0517:39icemanmeltingon the peer, I haven’t modified that#2022-04-0517:39icemanmeltingis that something one should do?#2022-04-0517:42favilathose targets don’t apply when there’s a full gc and memory pressure. 
I’m really just suggesting that if you know the peer is busy or could have memory pressure on it, rule out that the timeout is due to a GC pause#2022-04-0517:43favilathere are jvm startup flags that will log GC pause activity#2022-04-0517:43favilato console or to a file#2022-04-0520:21icemanmeltingoh, ok, thanks for the pointers, much appreciated 🙂#2022-04-0520:21icemanmeltingI will check that out#2022-04-0520:18jaretHowdy all! https://forum.datomic.com/t/datomic-1-0-6397-now-available/2064
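The GC-logging startup flags favila mentions can be sketched like this (unified logging syntax for JDK 9+; the jar name is a placeholder):

```shell
# Log GC activity, including pause times, to gc.log with timestamps:
java -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar your-peer-app.jar
```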
#2022-04-0521:48dazldin the same transaction, is it possible to both retract an older entity, and create a new one, where an identity attribute is shared between both?#2022-04-0521:49dazldI’m guessing no, as there’s no way to disambiguate which entity is being referred to via attribute identities.. ?#2022-04-0521:51dazldie:
[[:db.fn/retractEntity [:a/id1 "a"]]
 [:db.fn/retractEntity [:a/id2 "b"]]
 {:thing/id "thing"
  :thing/stuff {:a/id1 "a"
                :a/id2 "b"}}]
this can’t work, even with db/ids?#2022-04-0521:58favilaYou should run this to know for sure. I would expect all lookups to happen before retraction, so this will cause {:a/id1 "a" :a/id2 "b"} to expand to [:db/add id1a-entity-id :a/id1 "a"] etc. Since those same assertions are being retracted in the same transaction via the retractEntity, the transaction will fail with a conflict.
#2022-04-0522:01favilaIn general all lookups happen on the “before” db and all operations in a transaction are applied atomically. (Exceptions are composite tuples, which do read an intermediate state to know what to update the new values to; and entity predicates, which read an “after” db right before commit.) So it’s not ambiguous at all what a lookup in a transaction will do.#2022-04-0522:02favilathere’s no way, even in a transaction function, to see a “in-progress” or “partially-applied” database value#2022-04-0608:25dazldit does indeed fail with a conflict - will try with concrete IDs too (doesn’t work, sadly)#2022-04-0612:19favilaconcrete ids will work IFF the entity ids for the retracts and the new assertions are different#2022-04-0611:16Shuky BadeerHi guys! Few days ago I asked a question about datomic here and someone brought up XTDB. I had a chance to look into it and it seems pretty cool. Does anyone has a good guide for how to set xtdb up on the cloud like GCP or AWS? Thanks a lot!#2022-04-0611:19tatutthere's a channel for #xtdb{:tag :div, :attrs {:class "message-reaction", :title "white_check_mark"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("✅")} " 1")}
#2022-04-0611:21Shuky BadeerThanks a lot!#2022-04-0615:01timoI took over a Datomic job and still a bit new to it. Is it bad for Datomic to transact huge amounts of data every night that is not changed? I read https://ag91.github.io/blog/2022/03/13/datomic-a-little-snippet-to-analyze-what-attributes-your-transactions-change-most-often/ and found out that there are more than 2 million transactions every night that only have a :txInstant in it and I have the feeling that this is not good... Do I need to sort out what to transact in my code or is there some kind of trick to it?#2022-04-0615:05emccueAs long as you can pay for the storage, conceptually its sound#2022-04-0615:05emccueyou are reasserting facts#2022-04-0615:05emccuesomeone else can probably accurately tell you if itll be a problem#2022-04-0615:05favilawell, if they’re empty transactions, it’s not known what facts are being asserted#2022-04-0615:05favilaif any#2022-04-0615:06favilathese could actually be submitted as empty transactions.#2022-04-0615:06favilaThe presence of the transaction itself could be a signal that a job was done, but for that to be useful signal the transaction would need some other metadata on it, not be completely empty.#2022-04-0615:07favilaI would say this is possibly a code smell, but operationally it’s not a problem in itself#2022-04-0615:07emccuei read it as re-transacting a bunch of data and the only difference was the timestamp#2022-04-0615:07timook, thanks. so it is a problem for the underlying storage like sql-db but not for datomic itself?! It is growing strong and needs to be contained.#2022-04-0615:08favilawell, is it? 
2 million empty transactions is not going to take much space#2022-04-0615:08favilait’s only going to take log space, and won’t take any index space#2022-04-0615:08timoyeah, it is every night and the underlying oracle is more than 2tb already and growing fast#2022-04-0615:12favila> i read it as re-transacting a bunch of data and the only difference was the timestamp
From the tx log, you can’t tell the difference between this and (d/transact conn [])#2022-04-0615:12timoright, I am already checking for empty now#2022-04-0706:26Linus EricssonIf you want to avoid doing a lot of entirely empty transactions (which doesn't really give you much in terms of traceability) you should look in the db (d/db conn) for the data you try to upsert. If it is already there, you don't have to transact anything. But maybe you should create a transaction tagged with data that makes it apparent the system has made the integrity check of the data instead. This probably won't have to use 2 million transactions per night, though...
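A minimal sketch of Linus's check-before-transact idea (the entity id, attribute, and value here are hypothetical):

```clojure
;; Keep only assertions not already present in the current db value,
;; and skip the transaction entirely when nothing is novel.
(let [db    (d/db conn)
      facts [[:db/add 17592186045418 :user/email "ada@example.com"]]
      novel (remove (fn [[_op e a v]]
                      (seq (d/datoms db :eavt e a v)))
                    facts)]
  (when (seq novel)
    @(d/transact conn (vec novel))))
```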
#2022-04-0811:48timoDoes it make history queries slower when there are retransactions every night with unchanged data?#2022-04-0811:53favilano, because there are no datoms#2022-04-0811:53favilathe only datoms are the :db/txInstant datoms#2022-04-0812:02timothanks#2022-04-0620:14Robert A. Randolphdev-tools release 0.9.72 - https://forum.datomic.com/t/cognitect-dev-tools-version-0-9-72-now-available/2066#2022-04-0711:41Ivar RefsdalI've started writing a blog post about how Datomic handles network read failures.
Would publishing it be in violation of EULA term
(j) publicly display or communicate the results of internal performance testing or other benchmarking or performance evaluation of the Software;?
Thanks.#2022-04-0720:26Robert A. RandolphIt is exciting that you are interested in blogging about Datomic, and we appreciate your inquiry regarding the EULA. We would be happy to review and make sure that the material is technically accurate and compliant with terms of use.
When you are ready for this process please create a support ticket with more details.#2022-04-0807:55Ivar RefsdalThanks. Will do.#2022-04-0714:40Lucas JordanI am using datomic cloud. I have a list of eids, I want to pull all of them in one query. How do I do that? (edited)#2022-04-0714:47favila(d/q '[:find (pull ?eid pull-expr) :in $ pull-expr [?eid ...] :where [?eid]] db ['*] your-eids)#2022-04-0714:51Lucas JordanThanks @U09R86PA4, that upgraded my brain on how datomic queries work 🙂#2022-04-0715:07ghadiyou can also map d/pull (with the same db!) over your eids#2022-04-0715:11Lucas Jordan@U050ECB92, thanks, yes. That is what I was originally doing, but sort of assumed the overhead would be higher (multiple calls). My assumption may be wrong.#2022-04-0817:54jasonjcknwhats the latest jdk version that works with peers #2022-04-0818:01favila17
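Formatted for readability, the two approaches from the Lucas Jordan thread above (`db` and `your-eids` assumed bound):

```clojure
;; One query: pass the pull pattern and the eid collection as inputs.
(d/q '[:find (pull ?eid pull-expr)
       :in $ pull-expr [?eid ...]
       :where [?eid]]
     db ['*] your-eids)

;; Or map d/pull over the eids against the same db value.
(mapv #(d/pull db '[*] %) your-eids)
```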
#2022-04-0818:02favilathe latest release (few days ago) is the first to officially support 17 on peers and transactor#2022-04-0818:02favilabefore that release, unofficially, 17 works on the peer (I tried it) but it didn’t work on the transactor#2022-04-0818:25jasonjcknty#2022-04-0920:39nottmeyI just stumbled upon this https://forum.datomic.com/t/datomic-fulltext-search-equivalent/874 from 2019 and I’m wondering whether the statements regarding search are outdated or not.
To sum it up, when I follow the setup for datomic cloud, there will be no standard way of searching string based attributes in an indexed way? (Ofc. there is scanning the index with a custom predicate, but that’s not advisable, right?)
So when my frontend needs any reasonable search, the next big thing is to setup a third party search service? Are there common solutions or other options?#2022-04-0920:58ennHonestly you are best off using some secondary projection (like ES) anyway. Among its other limitations, the (on-prem) Datomic fulltext indexing can’t be removed on an attribute after being enabled, meaning that if you do go to ES down the road you’ll still be paying the cost of that indexing essentially forever.
#2022-04-0921:40nottmeyyea, on-prem fulltext is a dead-end (it’s just nice because of the datalog predicate)
I’m more like wondering what “paved paths” exist as an alternative.#2022-04-0923:59tony.kayWe’re using ElasticSearch (Opensearch). It was too hard to hook a thread up to follow the transactions (via tx-range) to keep the indexes up-to-date. But yeah, it was an in-house solution and not something that was very “paved”
#2022-04-1111:15nottmeyIdea: Before leaving the comfy space of the datomic query engine for including full-text-search, I could (temporarily) use the schema of datomic for a rudimentary (non-scanning) search itself, right?
-> Tokenizing every searchable attribute and saving a relation between entity and token.
-> Then, on query, tokenizing the search request, matching the tokens and scoring the related entities based on the matches.
Has anyone heard of this approach before? Are there e.g. “datomic extensions” available for it?
(I know it would generate a lot more data, depending on how loosely you generate tokens. But I assume you would “pay” for this overhead anyways, by running ES somewhere else.)#2022-04-1004:07Drew Verleewould the recommended way to call a datomic ion on a set schedule (say once a month) be through aws eventbridge/cloudwatch?#2022-04-1023:16steveb8nThat's how I do it. Works well#2022-04-1102:21Drew Verlee@U0510KXTU thanks for the reply. Small hints like this really help a ton 🙂#2022-04-1116:20Adam LewisFor on-prem, is there any guarantee that a log value acquired after sync ... t will include t? That is assuming that log is called from the same thread which resolved the future returned by sync?
Execution error (ClassCastException) at dev/eval118048$fn (form-init6861183615777452247.clj:1).
class java.util.HashSet cannot be cast to class java.lang.CharSequence (java.util.HashSet and java.lang.CharSequence are in module java.base of loader 'bootstrap')
The missing piece was figuring out that the types changed (perhaps due to de/serialization), such that coll?, for example, returned false on something that originally had been a clojure.lang.PersistentHashSet - it had become a java.util.HashSet.
Adding a section to the tx function docs explaining what happens to data provided to the function would have helped. It was quite some debugging to figure it out, although the d/cancel api helped to bisect exactly which bit of code was blowing up.#2022-04-1212:31favilaPersistentVector can also become arraylist
#2022-04-1212:32favilaAnd this is indeed because of fressian serialization. It doesn’t have to be this way but these are the handlers they chose (maybe for performance)#2022-04-1314:17Ivar RefsdalI've also been bitten by this#2022-04-1314:18Ivar RefsdalI'm always processing arguments like this when writing transaction functions:
...
(:require [clojure.walk :as walk]) ; walk/prewalk is used below
(:import (java.util HashSet List))
...
(defn to-clojure-types [m]
  (walk/prewalk
   (fn [e]
     (cond (instance? HashSet e)
           (into #{} e)

           (and (instance? List e) (not (vector? e)))
           (vec e)

           :else e))
   m))
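For example, Ivar's `to-clojure-types` converts the fressian-deserialized collection types back into Clojure ones:

```clojure
(to-clojure-types {:ids (java.util.HashSet. [1 2 3])})
;; => {:ids #{1 2 3}}
```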
#2022-04-1314:27favilaIt’s surprising but I’ve rarely found that it matters (which is why it’s annoying when it bites!)#2022-04-1314:27favilae.g. queries return HashSet/ArrayList at the top.#2022-04-1314:28favilaclojure interop with j.u.Collections is really good#2022-04-1317:20dazldin theory.. yes, but at least coll? returns false for java.util.HashSet#2022-04-1317:22dazldmap etc work fine, so perhaps it’s just an oversight? not sure.#2022-04-1317:23dazldI guess the big thing was that it works differently on the transactor, compared to in memory, which was really surprising.. hard to write tests that cover this without quite some gymnastics..#2022-04-1407:41Ivar RefsdalRunning a full fledged container with datomic would catch this, right?
I should look into clj-test-containers: https://github.com/javahippie/clj-test-containers
#2022-05-1310:40Ivar RefsdalA month has passed already... I "solved" this problem in the following way using https://github.com/clojure/data.fressian:
(ns com.github.ivarref.add-fressian
  (:require [clojure.data.fressian :as fress]
            [datomic.api :as d]))

(defn transact [org-transact]
  (fn [conn tx-data]
    (org-transact conn (fress/read (fress/write tx-data)))))

(defn with-fressian [f]
  (with-redefs [d/transact (transact d/transact)]
    (f)))
and then inside my tests I have something like:
(test/use-fixtures :each add-fressian/with-fressian)
While this works the problem is still how Datomic works (and differs in networked database vs. local)#2022-04-1212:46Maciej SzajnaHi guys! A question about @(d/transact conn ..) and subsequent (d/db conn): is the d/db guaranteed to see the effects of the transaction (T value equal or greater than the transaction?) I am aware d/transact returns the db value immediately after transaction completion, and I use it 99% of the time, but in this particular instance it's particularly inconvenient.
The question is really this: is it possible at all, through the JVM instruction reordering or any other kind of magic, that (do @(d/transact conn ..) (d/db conn)) might return a db value representing state before the transaction?
Edit: it's been answered before https://stackoverflow.com/questions/47693495/datomic-on-a-peer-does-connection-db-read-your-writes-after-connection-trans#2022-04-1212:56Maciej SzajnaOh, it's been answered before https://stackoverflow.com/questions/47693495/datomic-on-a-peer-does-connection-db-read-your-writes-after-connection-trans
The answer is: yes, the guarantee is there#2022-04-1217:59nottmeyWould anyone be interested in a library/example about this? 😄#2022-04-1218:09respatializedhttps://yyhh.org/blog/2021/11/t-wand-beat-lucene-in-less-than-600-lines-of-code/
This isn't Datomic, but it is an example of adding full text search to a different Datalog based DB.#2022-04-1218:17nottmeyoh nice, thanks for the tip#2022-04-1218:42nottmeythis actually sounds really promising 😳#2022-04-1415:25nottmeyWhat am I missing?
I have
{:db/ident :simple1/position+offset
 :db/valueType :db.type/tuple
 :db/tupleTypes [:db.type/long :db.type/long]
 :db/cardinality :db.cardinality/many
 :db/noHistory true}
and I’m doing this (like described https://docs.datomic.com/cloud/schema/schema-reference.html#heterogeneous-tuples)
(d/transact conn {:tx-data [{:db/id 114349209391875
                             :simple1/position+offset [0 6]}]})
but it throws this error 😕
> Execution error (IllegalArgumentException) at datomic.core.db/coerce-tuple (db.clj:2800).
> Don’t know how to create ISeq from: java.lang.Long#2022-04-1415:26ghadiyou have a cardinality many tuple#2022-04-1415:26ghadibut you are inputting a single tuple#2022-04-1415:26nottmeyahhh yes, that’s different, I expected it to work like in the example :man-facepalming:#2022-04-1415:26ghadiso it is looking for a Seq where it sees a 0
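The fix ghadi points to is one more level of nesting: a :db.cardinality/many tuple attribute takes a collection of tuples, not a single tuple. A sketch reusing the transaction from the thread:

```clojure
;; Note the extra vector: a collection of [long long] tuples.
(d/transact conn {:tx-data [{:db/id 114349209391875
                             :simple1/position+offset [[0 6]]}]})
```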
#2022-04-1515:40nottmey[:find ?e ?a ?v
:in $ [?a ...]
:where
[?e ?a ?v]
[(missing? $ ?e :simple1/match)]]
works fine
[:find ?e ?a ?v
:in $ [?a ...]
:where
[?e ?a ?v]
(or
[(missing? $ ?e :simple1/match)]
[(missing? $ ?a :simple1/match)])]
throws
> Execution error at datomic.core.datalog/compile-expr-clause (datalog.clj:1206).
> Unable to resolve symbol: $ in this context
Is there a way to make this work?
#2022-04-1515:50favila($ or …)?#2022-04-1515:53nottmeyseems to do something, but now i get
> Execution error (AssertionError) at datomic.core.datalog/unifying-vars (datalog.clj:899).
> Assert failed: All clauses in ‘or’ must use same set of vars, had [#{?e} #{?a}]
> (apply = uvs)#2022-04-1515:53nottmeyI guess, I need to use or-join?#2022-04-1515:56nottmeyyep, works#2022-04-1515:56nottmeythank you @U09R86PA4 🎉
that does not seem to be documented or did I miss it?#2022-04-1515:57favilaIt’s documented. It’s how you use a different data source for a rule#2022-04-1515:58favilaThe surprising thing here is that $ isn’t always the symbol for the rule’s data source if you don’t override
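Pieced together from the thread (a sketch, not verified against a running system): or-join declares its unification vars explicitly, so each branch may use a different var set, and prefixing the clause with $ passes the data source in so missing? can resolve $ inside it.

```clojure
;; Sketch of the working query reconstructed from the thread above.
'[:find ?e ?a ?v
  :in $ [?a ...]
  :where
  [?e ?a ?v]
  ($ or-join [?e ?a]
     [(missing? $ ?e :simple1/match)]
     [(missing? $ ?a :simple1/match)])]
```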
#2022-04-1515:59nottmeyAh found it, the overwrite part is “hidden” in the syntax description here:
https://docs.datomic.com/cloud/query/query-data-reference.html#using-rule#2022-04-1805:39hdenHow do I run an effective range query on composite tuples?
For example, consider the domain of course registrations, modeled with the following entity types:
[{:db/ident :course/id
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :course/campus
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :course/created-at
:db/doc "Timestamp stored as epoch millisec."
:db/valueType :db.type/long
:db/cardinality :db.cardinality/one}
{:db/ident :course/campus+created-at
:db/valueType :db.type/tuple
:db/tupleAttrs [:course/campus :course/created-at]
:db/cardinality :db.cardinality/one}]
Assuming there are lots of campuses and courses, how do I efficiently list all the courses that were created after a specific timestamp and are associated with a specific campus?
(The query has to be dynamically generated, so raw-index access is not an option.)
A
'[:find (pull ?course [...])
:in $ ?campus-id ?timestamp
:where
[(tuple ?campus-id ?timestamp) ?tuple]
[?course :course/campus+created-at ?index]
[(>= ?index ?tuple)]
[(untuple ?index) [?campus-id _]]]
B
'[:find (pull ?course [...])
:in $ ?campus-id ?timestamp
:where
[?course :course/campus+created-at ?index]
[(untuple ?index) [?campus-id ?created-at]]
[(>= ?created-at ?timestamp)]]
ref: https://docs.datomic.com/cloud/schema/schema-reference.html#composite-tuples#2022-04-1812:31favilaA. (I’m assuming you are using cloud and have a value index?) B cannot make use of an index#2022-04-1812:31favilaAlso, the real answer is “try both and see which is faster”. ;)#2022-04-1812:34hdenGot it. Thanks.#2022-04-1915:12PrashantI am evaluating On-Prem version of Datomic and have been getting below warning messages at Datomic transactors:
No Dead Letter Address configured for queue schematest2-3b27988c-4fca-4e59-aa47-9dcae9e9ad88.tx-result625e9e54-2d49-436e-95b6-b9de541636d5 in AddressSettings
No Expiry Address configured for queue schematest2-3b27988c-4fca-4e59-aa47-9dcae9e9ad88.tx-result625e9e54-2d49-436e-95b6-b9de541636d5 in AddressSettings
A few pointers on how to address these would be very helpful.
Are these ActiveMQ settings documented somewhere at https://docs.datomic.com/on-prem/?#2022-04-1915:30favilaI believe these are deliberate and you are not meant to address them. These are queues set up by datomic itself to communicate tx updates to peers (artemis is considered an implementation detail), and it didn’t configure any dead-letter queue for them because it doesn’t make sense to dead-letter these if the peer is gone or can’t be communicated with--the message is the transaction log, and the peer will get it back when it reconnects.
#2022-04-1915:32PrashantThanks @U09R86PA4 :thumbsup:#2022-04-2002:59Drew Verleeare https://docs.datomic.com/cloud/query/query-pull.html#reverse-lookups just for pull syntax?#2022-04-2003:00favilayes. (also entity maps)#2022-04-2003:01favilain datalog, you just reverse the e and v bindings
#2022-04-2003:04Drew Verleeyea. that makes complete sense and explains why I was very confused the first time i encountered the idea. I was trying to justify it in pure datalog...#2022-04-2003:04Drew Verleethank you for the explanation!#2022-04-2014:43uwoIf you're on-prem you can call a reverse-lookup keyword on the result of d/entity as well{:tag :div, :attrs {:class "message-reaction", :title "eyes"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👀")} " 1")}
#2022-04-2112:04augustlI believe reverse lookups work for datalog queries as well#2022-04-2112:06favila@U0MKRS1FX you cannot do this: [?e :foo/_bar ?v]. You must do this [?v :foo/bar ?e]#2022-04-2112:10augustlbrainfart confirmed, thought I did that in my code, but I don't 🙂#2022-04-2207:13plexusIt seems datomic peer still ships with presto 348 judging by the changelog. Are there concrete plans to upgrade and/or migrate to trino?#2022-04-2207:15plexusContext: we have built a BI solution for a customer based on datomic analytics and metabase. Metabase used to implement a custom presto connector (using http directly). In the latest release they have replaced that with a jdbc based connector, but in the process also migrated to trino, so we are currently held back from upgrading metabase.#2022-04-2212:48emccueWhat is trino and what is presto 348?#2022-04-2213:52favilaThis is about the datomic analytics product. Prestosql is a sql query engine; datomic has a connector for it so you can sql-query a datomic db. prestoSQL renamed itself to trino because of a competing fork of the engine called prestodb: https://trino.io/blog/2020/12/27/announcing-trino.html 348 is a version number of prestoSQL (released dec 14 2020) from before it’s rename to trino{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-04-2213:11Kris CI am trying to find reverse keys for an entity ("foreign" ref attributes that point to this entity). I have found the following "trick" via google:
=> (.touch ent)
=> (keys (.cache ent))
but it doesn't seem to work. Is there any other way to achieve that?#2022-04-2213:21favilaWhat do you mean “id doesn’t seem to work”?#2022-04-2213:21Kris CI do not get the reverse keys. Only the attributes of the entity#2022-04-2213:23Kris CAh, it was a typo "it doesn't seem to work"...#2022-04-2213:26favilaGet the entity db out using d/entity-db then query [_ ?attr ?e] or (d/datoms db :vaet e)#2022-04-2213:27Kris Cack, any idea why the "trick" is not working?#2022-04-2213:31favilaI think touch used to realize Eavt and Vaet, but now only does Eavt. This is a hack anyway: entity is designed for when you know your attributes in code already. D/touch is for dev-time printing of entity maps and such#2022-04-2213:32Kris Cah ok, thank you so much, @U09R86PA4!#2022-04-2215:14devn@jarrodctaylor saw https://max-datom.com/ on the front page of HN. Nice job!#2022-04-2215:40jarrodctaylorThus begins our march to Datomic world domination!!
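A sketch of the d/entity-db + :vaet approach favila suggests in the thread above (on-prem peer API assumed):

```clojure
;; Which attributes on other entities point at `ent`?
(let [db (d/entity-db ent)
      e  (:db/id ent)]
  (into #{}
        (map #(d/ident db (:a %)))  ; a datom's :a is an entid; resolve to a keyword
        (d/datoms db :vaet e)))
```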
#2022-04-2217:47Elliot BlockApologies if this is a little nutty/premature, but if we have #clojuredart apps running natively on the desktop, is it a crazy idea to want to try to put a Datomic client into them so that apps can talk directly to DB / have direct access to the information model? I assume this is a ton of work, but curious if it makes sense as an architectural model in the first place. Curious if this could cut a lot of the intermediate infrastructure out of a frontend app. Any thoughts greatly appreciated!#2022-04-2217:59favilaThis is theoretically possible already with clojurescript, but no one has wanted it enough to write a clojurescript client api library, so I’m not sure dart changes that.#2022-04-2218:00favilaI would say this makes sense as a debugging tool or replacement for the (not very good) datomic console, but architecturally you very quickly need more layers to enforce policy, and then the direct connection stops making sense because you need to reduce its power in some way, or allow for interception and transformation.#2022-04-2218:02favilaSo I don’t see “datomic client in a dart desktop app” as a game changer; you will need an intermediate with more control before very long, and then you’re back to an intermediate framework or library (which there are already many good ones, e.g. fulcro or reframe)#2022-04-2218:11Elliot Blockyeah fascinating thank you!
I was kind of hoping perhaps something like Datomic database functions had evolved to be, for example a higher-level intermediary/domain model for transactions against the DB but it sounds like that’s not really it/there yet.
very interesting! will start looking into the intermediates, thanks!#2022-04-2218:11favilathe datomic cloud answer to that need is ions
#2022-04-2218:14favilaBy analogy with the SQL world, many sql dbs do have features which look like they might be enough to make them application platforms, ie authentication, stored procedures, and procedure+table/row/column-level access control#2022-04-2218:15favilabut I rarely see someone say, “lets just put a sql client in our desktop app”#2022-04-2218:16favilaso even if datomic did grow similar features, I’m not sure it would be a good or popular choice to use the datomic client api as the client application’s primary api to interact with data#2022-04-2218:18Elliot Blockright — there’s usually some kind of application-level API in between the client and the DB
it’s interesting to me because sometimes the application-API ends up being REST/RPC/GraphQL that looks basically almost just like the DB, but not the DB. Like it ends up being some kind of higher domain model, with auth, but not exactly the low-level data model, but related to it…#2022-04-2218:20favilayeah. but those “not exactly like” are what kill#2022-04-2218:21Elliot Blockhaha indeed =/#2022-04-2218:21favilaeven as a backend interface, using datomic directly is becoming a problem for us in some cases. sometimes we need to preserve an attribute with its meaning but not its implementation#2022-04-2218:22Elliot Blockokay so, and so therefore if one needs that anyway, might as well put that on a server and then have a client talk to that server#2022-04-2218:22Elliot Blockinteresting#2022-04-2218:23faviladatomic has a really great attribute-centric data model, but, it is still at the end of the day an implementation specification not a data model#2022-04-2218:24favilaI’m looking at pathom3 very seriously as something that has the attribute model of datomic but more flexibility around evaluation and implementation. And yes, the vast majority of attributes just pass through to datomic#2022-04-2219:46Elliot Blockyeah very cool — structurally pathom looks kinda like a federated GraphQL gateway (e.g. https://apollographql.com) except with a logic-programming/datalog/prolog-y query engine instead of a GraphQL-nested-map-join engine
Both those plus the new upcoming existence of the HTTP QUERY method make me think the API intermediary clients want is this “application-level set of API functions, but need to be able to express queries more sophisticated than ‘GET resource’”#2022-04-2315:39JohnJWhat does it mean to preserve an attribute with its meaning but not its implementation? to preserve the attribute meaning outside datomic?#2022-04-2315:42favilaTo avoid having to rewrite all the code that uses it#2022-04-2315:43JohnJah#2022-04-2315:44favilaConcrete examples: we needed for operational reasons to drop full text from some attributes. Datomic doesn’t let you do that: you need to make a new attr. At our data size this involves a migration (db is using two attrs at once for the same data for a time) #2022-04-2315:45favilaIt would have been really nice to hide all this from the code and let it keep using the same attr. It would have saved weeks of dev time#2022-04-2315:52favilaAnother example: datomic can (but really should not) store strings larger than a kB or two. The recommendation is to store a key to some other system. We end up with a hybrid encoding for latency where it’s in datomic if short enough. Now the same data is across two concrete attrs. Again, a migration was involved. 
Even worse, this introduces n+1 problems with the other store without ugly contortions.#2022-04-2315:54favilaAnother example: we store stat aggregates (eg counts of x for y) and have them available as attrs, but not be forced to have them be concrete attrs in datomic all the time#2022-04-2315:55favilaAll of these come down to: d/entity and d/pull have a fixed implementation that maps an attribute to a datomic attr, and if we want to use attrs as stable interfaces, we need some implementation flexibility that these don’t provide #2022-04-2316:08JohnJgot it thx, I guess this comes down to how much logic you want to keep in the db vs the app#2022-04-2316:08JohnJlike being at the mercy of the DB vs writing a bunch of application code#2022-04-2316:10favilaI don’t think that’s quite right. Being at the mercy of the db can mean (re)writing a bunch more application than you started with#2022-04-2316:11Elliot BlockAt the risk of taking the thread in a circle, does that mean it’s possibly a reasonable thing to want to put an abstract datalog interface in the client, whose persistent storage backend is an implementation detail? But there is an data-layer-interface with auth directly in the client?
(where the abstract interface looks datomic/datalog-like, but may or may not be directly implemented against datomic?)#2022-04-2316:12favilaSure, that’s a possibility#2022-04-2316:12JohnJ@U09R86PA4 yeah true#2022-04-2316:13JohnJbut it's true for most database systems out there no?#2022-04-2316:14Elliot Block(okay awesome that line of thought is coming together, thanks! Totally makes sense that the DB implementation itself is often useful to put behind an abstraction e.g. for auth, policy, facading instead of migrating, abstracting over sharding, etc.)#2022-04-2316:17Elliot Block(Reminds me of this old pattern from long ago: https://en.wikipedia.org/wiki/Data_access_object)#2022-04-2316:17favila@U01KZDMJ411 yeah that’s my point. Datomic is not magic. Its implementation is fixed within certain boundaries, like any db.#2022-04-2316:18favilaattributes, pull exprs and datalog are great for data model expression, but d/pull, d/query, d/entity are not data models but implementations of them that map to a datomic storage engine in a fixed way
#2022-04-2316:21favilaIt’s a so much lower friction abstraction with such great sympathy which how Clojure models data that it can be easier to make the mistake that the datomic attrs and the data model are exactly the same{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-04-2316:21favilaNo one would make that mistake with sql!{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-04-2316:22favilaAlso you can go a very long way before you hit a painful bit where you realize you need a little indirection#2022-04-2316:23favilaBut you’ve already written a bunch of “boundaryless” code by that time and backfilling the abstraction layer you need becomes hard#2022-04-2316:26JohnJyeah, the attractiveness of datomic with clojure is how you can keep using the same data model in both but you make a clear point how can the implementation limit the benefits of EAV flexibility#2022-04-2316:26Elliot BlockThis is probably horrifying but theoretically if the Datomic client interface supported either pre/post hooks / CLOS metaobject-style extension / interceptor middleware / multimethod-or-protocol dispatch, then you could keep the calling code the same but add general-or-casewise behavior modifications
Otherwise it seems like all the code needs to be written to an abstract interface/indirection just in case future behavior extension is needed, and otherwise it’s just an empty pass-through layer.#2022-04-2316:28favilaAgain I have high hopes for pathom in this respect#2022-04-2316:29JohnJdon't know much about pathom, but it would be something like having SQL Views?#2022-04-2316:29JohnJon top of EAV of course#2022-04-2316:32favilaSort of. It's only an attribute/pull expr model (no datalog). You define “resolvers” which declare what they need as input attrs on an entity and what attrs they provide for that entity. You then query it by seeding with what data you have and a pull expr and you get a map of the same shape filled out with what you asked for#2022-04-2316:33JohnJFWIW, "boundaryless" is what keeps me using datomic for personal stuff, like "look at all the stuff I don't have to write" but can see how that can become a problem at scale
#2022-04-2316:35favilaIt also has a “foreign interface” where you get an entire query subtree extracted for you (eg, all the datomic attrs that map 1-1) and you just need to return a map in the right shape. This makes it really easy to have the “fall through” cases, and is also a handy way to avoid n+1 problems across process boundaries
#2022-04-2620:28JoeAInteresting, so the benefits of not having impedance mismatch are somewhat erased by the implementation?#2022-04-2620:34favilaI’m not sure I follow?#2022-04-2620:47JoeAMaybe I'm misunderstanding, but I'm assuming there's no impedance mismatch between clojure and datomic in the data model which makes it sound like everything is going to be smooth sailing but the database restrictions don't make it so#2022-04-2620:48JoeAwhen you say no one would make this mistake in SQL, is it because you are forced there to write some abstraction layer? to isolate the application layer from the DB.#2022-04-2620:51favilaSo in many cases you can represent your data model in datomic without much translation. The result of a d/pull is exactly what your domain models would have looked like.#2022-04-2620:51favilabut that is basically never true in SQL#2022-04-2620:52favilaso if one day your data model is not exactly like datomic, you probably didn’t write a layer of indirection in between your domain objects and your d/pulls already, so now you have to retrofit it in.#2022-04-2620:52favilabut in SQL world, the natural mode of expression in SQL is so different that you almost certainly have that layer built already#2022-04-2621:00JoeAunderstood, thx, do you still prefer datomic's data model despite the implementation lack of features / restrictions?#2022-04-2621:00favilaprefer it to what? sql?#2022-04-2621:01JoeAyes, to traditional RDBMS like postgres#2022-04-2621:01favilaoh god, a million times yes. not sure how you could have gotten another impression 🙂#2022-04-2621:02favilathe only thing I sometimes want from RBDMSes are specific operational characteristics#2022-04-2621:02favilaI never ever want its data model#2022-04-2621:03JoeAyeah, operational characteristics#2022-04-2621:04JoeAare important though, like the string limit and rigidity of attrs in datomic look jarring#2022-04-2621:05favilayeah something like TOAST for large values is a curious omission. 
But I don’t follow on the “rigidity of attrs”. In every way attrs seem more flexible than columns and tables#2022-04-2621:06JoeAI'm just checking out datomic, haven't used it in anger#2022-04-2621:06JoeAby rigidity of attrs I mean the implementation not the data model, like you can't disable fulltext#2022-04-2621:07favilayeah, fulltext is also a bit of a quiet curse.#2022-04-2621:07JoeAwhich you alluded to before#2022-04-2621:10favilayep. and if datomic could do these things those sources of indirection-need would have been gone. but there’s still stuff like maintaining aggregates, maintaining computed/derived values (materialized or not), etc, that I’m not sure I can reasonably ask datomic to take care of.#2022-04-2621:11favilaThere’s also an inherent cost to keeping all transacted data--some stuff really is just ephemeral and high-volume and storing it in datomic forever becomes a chore, and it’s a shame you need to give up the attribute model to do it.#2022-04-2621:11favilathese all speak to an occasional need for some indirection without faulting datomic for being the kind of db it is and not another one.#2022-04-2621:12favilaand there are dbs that support an attribute model but are quite different from datomic: datalevin, xtdb, and many flavors of datascript storage backend#2022-04-2621:14JoeAyeah, had a little look at them, xtdb looks more like a document store, different data model than datomic, the other ones don't look too serious/ready for production use, but can't tell#2022-04-2621:15JoeAdatomic is a tough choice, there's the operational overhead also (more processes)#2022-04-2621:17JoeAI guess for anything serious, like public webapps, a moment will come when you just have to run another DB too, besides datomic#2022-04-2621:18JoeAso maybe just having to deal with tables doesn't start to look that bad#2022-04-2621:18favilaBah, I disagree. 
tables never again#2022-04-2621:19JoeA😉#2022-04-2621:19favilaFWIW at Shortcut we use one datomic database as our primary store with dynamo as the backing store, and additional dynamo tables and s3 for stuff that isn’t appropriate for datomic (high write volume, ephemeral data, large blobs)#2022-04-2621:20JoeAsounds good, less devops headache but maybe too pricey?#2022-04-2621:21favilaand I love the peer model--scaling query load with peers is way easier than administering a cluster#2022-04-2621:21JoeAso no clients?#2022-04-2621:22favilawe have a peer-server around, but we don’t use it for sustained load. Again, it’s an indirection problem: d/entity and d/pull can’t transparently be replaced for peer vs client#2022-04-2621:22favilawith something like pathom in the middle, it could#2022-04-2621:23favilaour biggest headache honestly is dynamodb + memcached. We have serious envy for the cloud’s 3-tier storage and wish on-prem had it too#2022-04-2621:23JoeAgotcha. Curious if you can share, a nubank article scared me saying they run more than 2400 transactors, Shortcut looks really cool, how many transactors are running?#2022-04-2621:23favilajust one#2022-04-2621:24JoeAoh impressive, so read heavy workload#2022-04-2621:24favilayes; that’s definitely what datomic is for#2022-04-2621:25JoeAWhat's the issue with dynamodb? setting the correct read/writes?#2022-04-2621:25favilait has really high latency variance, and is expensive#2022-04-2621:26JoeAI would have thought that something like shortcut would be write heavy#2022-04-2621:26JoeAbut no idea really#2022-04-2621:27JoeAso dynamo's claim everything is done in single-digit milliseconds is not true?#2022-04-2621:28favilahah, no. to be fair, datomic is using dynamo as a blob store. 
for typical item sizes people use dynamo for (a few kb at most), dynamo may indeed have better variance.#2022-04-2621:29favilabut there are plenty of products that are dynamo/cassandra-like and exist pretty much only to guarantee lower latency and variance, e.g. https://www.scylladb.com/#2022-04-2621:29JoeAOk, have you tested SQL storage on a fast disk as an option for shortcut?#2022-04-2621:30favilanot for shortcut, but I’ve used mysql as the storage on moderately sized datomic dbs in the past (3+ years ago). It was fine#2022-04-2621:30JoeAinteresting, can datomic be made to work with those? scylladb for example#2022-04-2621:31favilaI don’t think so because you need to use their client#2022-04-2621:31faviladatomic uses the aws client directly#2022-04-2621:31favilamaybe it’s wire-compatible and there’s some way to make that work#2022-04-2621:32favilaeither with the dynamo or the cassandra backend#2022-04-2621:33JoeAcool (about mysql), I have setup datomic with postgres for now, since it's a single table, I'm wondering if datomic can really max out postgres, I guess it would require a lot of peers#2022-04-2621:34JoeAanyway, if shortcut can run with one transactor which is impressive, then I think I'm going to be ok 😉#2022-04-2621:37JoeAshould any SQL storage work?#2022-04-2621:39JoeAthe docs indicate it should, curious how they abstract that, it uses some lowest common denominator standard SQL?#2022-04-2621:39favilawell, it has sql to build the tables#2022-04-2621:39favilaagain, it’s used as a key-value blob store#2022-04-2621:40favilaany sql that can store moderately sized binary blobs efficiently will do (a few kb to <2mb)#2022-04-2621:40JoeAyeah, but the queries#2022-04-2621:40favilawhat queries?#2022-04-2621:40favilaselect, insert, update, delete#2022-04-2621:40favilathat’s it#2022-04-2621:42JoeAyeah those, pretty basic, I guess the SQL dialect of those don't change between DBs for the very basics#2022-04-2621:42favilayeah they are very simple sql statements. 
I’ve used it with sqlite without issue#2022-04-2621:43favilano joins#2022-04-2621:44JoeAoh neat, sqlite, one machine, fewer processes, is the java driver solid? It feels like the java world favors stuff java based like h2 more than sqlite#2022-04-2621:45JoeAh2 included in datomic is very old#2022-04-2621:46favilaI’ve used the xerial driver, it’s fine https://github.com/xerial/sqlite-jdbc#2022-04-2621:46favilait won’t be network-addressable like h2 though#2022-04-2621:47favilaso the peers need to be on the same instance. That’s fine for bulk workloads but not much else#2022-04-2621:47JoeAgotcha, the transactor does run embedded correct?#2022-04-2621:48favilano? the transactor and peer are always separate processes. You just won’t have an extra storage process#2022-04-2621:48JoeAI mean, the transactor uses h2 in embedded mode#2022-04-2621:49JoeAthere's something scary about h2, https://www.h2database.com/html/features.html#connection_modes#2022-04-2621:49JoeAIn embedded mode I/O operations can be performed by application's threads that execute a SQL command. The application may not interrupt these threads, it can lead to database corruption, because JVM closes I/O handle during thread interruption.#2022-04-2621:50JoeAdo you know if datomic handles that?#2022-04-2621:51favilaprobably? h2 is only used by dev storage, which is special because the transactor itself exposes an additional port as the storage port (I believe using sql). 
And you won’t use dev in production anyway.#2022-04-2621:51favilapeers do not access the h2 file directly#2022-04-2621:52favilait may use server mode honestly#2022-04-2621:53favilabecause it also exposes the console on yet another port#2022-04-2621:54JoeAif it uses server mode only and not mixed mode, an h2 process should be visible, no?#2022-04-2621:55JoeAbut I only see the transactor and peer processes#2022-04-2621:56favilait’s probably mixed mode then#2022-04-2621:56favilathis sounds like it: http://www.h2database.com/html/features.html#auto_mixed_mode#2022-04-2621:56JoeAWas thinking that for light load it might be ok but the h2 version is very old, could try upgrading or use sqlite#2022-04-2621:57JoeAyeah#2022-04-2622:00JoeAanyway, thx for the chat and insights#2022-04-2609:13lambdamHello,
I declared an ident as a 2-element unique tuple like so:
{:db/ident :foo/uname+uid
:db/valueType :db.type/tuple
:db/tupleAttrs [:foo/uname :foo/uid]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
Then, when I use it as a lookup ref in a pull call, it works:
(d/pull db '[*] [:foo/uname+uid ["domain" "123456"]])
But when I use it in a transaction:
@(d/transact conn
[{:db/id 987654321
:bar/link [:foo/uname+uid ["domain" "123456"]]}])
I get the following error:
...
1. Caused by datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:db.error/not-a-keyword Cannot interpret as a keyword: domain, no leading :
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message
"Cannot interpret as a keyword: domain, no leading :",
:db/error :db.error/not-a-keyword}
...
Does someone understand the meaning and/or the reason of this error?
Thanks a lot#2022-04-2612:40favilaDoes this work? [[:db/add 987654321 :bar/link [:foo/uname+uid ["domain" "123456"]]]]. I suspect it’s just ambiguity in the map desugaring--it’s trying to interpret it as a list of entity refs instead of one large entity ref. That map->attribute stuff is not aware of what attributes mean.#2022-04-2712:37lambdamHello,
Yes it works. Thanks.
Nonetheless, the map form works well with regular lookup refs, like [:foo/internal-name "plop"].
I tried to change the type of :foo/uname from string to keyword, and now I am getting the following error:
...
Caused by datomic.impl.Exceptions$IllegalArgumentExceptionInfo
:db.error/not-an-entity Unable to resolve entity: :domain
{:cognitect.anomalies/category :cognitect.anomalies/incorrect,
:cognitect.anomalies/message "Unable to resolve entity: :domain",
:entity :domain,
:db/error :db.error/not-an-entity}
It indeed seems to interpret the vector as a list of entities. But I don't understand why it does. Do you think that it is an inherent ambiguity coming from Datomic's well-defined semantics or a bug of misinterpretation?#2022-04-2712:54favilaThe map syntax is syntax sugar and the syntax is inherently ambiguous. Also on-prem has this “auto-keywordization” feature where you can put a string where a keyword is expected (e.g. “:foo”) and it will coerce to a keyword. (This was added I think so using datomic from java is easier. Cloud doesn’t have it. It makes things worse here.) So what should {:foo [:bar ["box" "baz"]]} mean? Is it [:db/add ID :foo :bar][:db/add ID :foo [:box "baz"]]? (the only possible interpretation before tuples were added.) Or is it [:db/add ID :foo [:bar ["box" "baz"]]]?#2022-04-2712:55favilaFor backward compatibility, I think it has to be [:db/add ID :foo :bar][:db/add ID :foo [:box "baz"]]#2022-04-2712:59favilaAnd your puzzling stacktrace is because [:db/add ID :foo [:box "baz"]] doesn’t make sense. It tried to turn the “domain” in ["domain" "123456"] into a keyword via auto-keywordization and couldn’t.#2022-04-2713:10lambdamBut since idents have types, the [:db/add ID :foo :bar][:db/add ID :foo [:box "baz"]] case couldn't be possible since :foo would have a :db/valueType :db.type/keyword type in the first assertion and a :db/valueType :db.type/ref type in the second.
So the only valid case would be [:db/add ID :foo [:bar ["box" "baz"]]] after tuples introduction and an error before, no ... ?#2022-04-2713:11favilathe desugaring is purely syntactic#2022-04-2713:11favilait doesn’t have a DB to introspect types#2022-04-2713:12lambdamAh ok, I see.#2022-04-2713:13favilaalso, [:db/add ID :foo :bar][:db/add ID :foo [:box "baz"]] is indeed possible if :foo is a ref#2022-04-2713:13favila:bar is a valid entity ref#2022-04-2713:13favilaso is [:box "baz"]#2022-04-2713:13favilaso is 12345#2022-04-2713:14favilaand in assertion contexts, so is -123 (temp id) or #db/id{:part :some-partition-kw :idx -1} (a tempid record object)#2022-04-2713:14lambdamA keyword can be a ref? I didn't know.
In which case?#2022-04-2713:14favilaso even if you used types, it wouldn’t help much#2022-04-2713:15favila:db/ident establishes a keyword as a ref#2022-04-2713:15favilathat’s how attribute lookup works#2022-04-2713:15lambdamAh yes. I see.#2022-04-2713:15favilahttps://docs.datomic.com/on-prem/schema/identity.html#entity-identifiers#2022-04-2713:16lambdamThanks a lot for all the information.
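For contrast with the ambiguous map syntax discussed above, a minimal sketch of explicit (unambiguous) transaction data; the attribute names and values here are hypothetical, not from a real schema:

```clojure
;; Hedged sketch: listing each assertion as a :db/add statement leaves
;; nothing for the desugaring to guess at. "t1" is a tempid string.
[[:db/add "t1" :my/name "plop"]
 [:db/add "t1" :my/code "123"]]
```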
I solved my problem with a little function:
(defn entity->attr-list [entity]
  (let [id (or (:db/id entity)
               (d/tempid :db.part/user))]
    (->> (dissoc entity :db/id)
         (mapv (fn [[key value]]
                 [:db/add id key value])))))#2022-04-2713:45lambdamHumm may be not so simple for idents with "many" cardinality...#2022-04-2713:46favilayes, nor nested maps#2022-04-2713:46favilaor reverse refs#2022-04-2713:47favilaif you are really attached to the map syntax except for this one handling of tuple values, consider annotating the tuple ref with metadata and expanding only those map entries via a postwalk#2022-04-2713:49lambdamThat's a lot of complexity for the map syntax. I'll change my code to explicitly format transaction statements without ambiguity. The standard way.#2022-04-2713:49favilathat’s what I recommend#2022-04-2713:49lambdam👌 Thanks#2022-04-2713:50favilasugar gives you cavities#2022-04-2713:51lambdam😂 I see.
Too much sugar at least.#2022-04-2811:49Benjaminwhat is the default way to implement 'reactiveness'? Do I have a loop that checks the current t of the database, then if it goes up it calls some handlers, possibly with the history since the last t?#2022-04-2812:26Joe LaneWhat are you trying to do?#2022-04-2812:58Linus Ericssonin on-prem - either use the transaction log queue, or be clever when creating the initial data#2022-04-2813:06Ivar RefsdalI think we need more context / details for giving a good answer.
If what you are looking for is a persistent queue, you might want to try https://github.com/ivarref/yoltq, which it seems you already have starred 😃.
yoltq uses https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/tx-report-queue to achieve "reactiveness". You may want to check that out as well. This is on-prem only.#2022-04-2813:55BenjaminI currently work on a system that does not use datomic. What we do is querying aws redshift to make slack messages when something of interest happened in the system. I was just wondering how I would do it if it was datomic. 😅 I was thinking along the lines of there is a query group that reacts to new data in the db and then for example makes slack messages.
-> now that I wrote it down I'm thinking that cloudwatch metrics and alarms might cover a lot of the use case
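The tx-report-queue approach mentioned earlier in this thread can be sketched roughly as below; `conn` and `handle-report!` are hypothetical, and this is a minimal sketch rather than production code:

```clojure
;; Hedged sketch of reacting to new transactions via the on-prem peer API.
;; d/tx-report-queue returns a java.util.concurrent.BlockingQueue of
;; transaction reports (:db-before, :db-after, :tx-data, ...).
(require '[datomic.api :as d])

(defn start-reactor! [conn handle-report!]
  (let [queue (d/tx-report-queue conn)]
    (future
      (loop []
        ;; .take blocks until the next transaction report arrives
        (handle-report! (.take queue))
        (recur)))))
```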
#2022-04-2817:09Ivar RefsdalI did some error/recovery reporting in yoltq:
If there are 3 consecutive error polls, i.e. errors in the database, then an error-callback will be invoked, triggering e.g. a slack message (or logging to ERROR as is the default).
If the error persists, it will wait 1 hour before invoking the error-callback, so slack/logs won't be flooded.
This isn't specific to Datomic though.
Here is that code:
https://github.com/ivarref/yoltq/blob/main/src/com/github/ivarref/yoltq/error_poller.clj
Maybe too fancy/complex, I don't know.. 🤷
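The throttling idea described above (notify after 3 consecutive error polls, then back off for an hour) can be sketched as a pure function; this is a hedged sketch, not yoltq's actual code:

```clojure
;; Decide whether to fire the error-callback: require at least 3
;; consecutive error polls, and at most one notification per hour.
(defn notify? [consecutive-errors last-notified-ms now-ms]
  (and (>= consecutive-errors 3)
       (>= (- now-ms last-notified-ms) (* 60 60 1000))))

(notify? 3 0 3600000) ;; => true
(notify? 2 0 3600000) ;; => false
```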
#2022-04-2823:57kennyfyi the new I4i instances seem useful for Datomic Cloud: https://aws.amazon.com/blogs/aws/new-storage-optimized-amazon-ec2-instances-i4i-powered-by-intel-xeon-scalable-ice-lake-processors/#2022-04-2913:16dazldis there any way to say “add n to a value” multiple times in a transaction, without having to coordinate the real values?#2022-04-2913:46favilaNo, you have to prepare your tx data differently. Transactions are run and applied atomically: there’s no db to read which ever had only some of your updates but not others#2022-04-2913:46dazldgot it, thought it might be like that#2022-04-2913:47dazldi’ll turn it around, and have multiple data supplied to the tx fn, and do the calculation there#2022-04-2913:17dazldie, if I say that [[:db/add "foo" :some/metric 1] [:db/add "foo" :some/metric 2]...] - i clearly get conflicts. what’s the solution?#2022-04-2913:18dazldif i use a tx fn, it’s the same problem, as it expands to the same tx data - unless I’m missing something?#2022-04-2913:45Linus Ericssonno, you have to do this addition when preparing the tx-data. but consider the possibility to do several transactions as well, they can sometimes be batched.
#2022-04-2913:47dazldexactly - it’s all ok when calling 1 fn per tx, but in a batch it’s not happy#2022-04-2913:48dazldbut - think i see a way out.#2022-04-2914:03ennwhen I’ve wanted to do something similar I ended up doing a little processing on the accumulated tx-data before actually transacting, to coalesce accretive changes like this.
#2022-04-3010:44dazld@U060QM7AA in the transactor, or the peer? I’m thinking a tx function that can take multiple items would be the simplest - but this condition that each transaction can only have one of these calls annoys.#2022-04-3011:48enn@U3ZUC5M0R I just did it in the peer.
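The peer-side coalescing that enn describes above can be sketched as follows; a hedged sketch assuming simple `[:db/add e a n]` tuples with numeric values, not anyone's actual implementation:

```clojure
;; Sum values for duplicate [e a] pairs before transacting, so the
;; transaction no longer contains conflicting datoms.
(defn coalesce-adds [tx-data]
  (->> tx-data
       (group-by (fn [[_op e a]] [e a]))
       (mapv (fn [[[e a] datoms]]
               [:db/add e a (reduce + (map last datoms))]))))

(coalesce-adds [[:db/add "foo" :some/metric 1]
                [:db/add "foo" :some/metric 2]])
;; => [[:db/add "foo" :some/metric 3]]
```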
#2022-04-3002:21jasonjcknThe project i’m working on is creating a knowledge graph of sorts, we’re pulling in data from variety of disparate sources, typically deeply nested JSON data, and it’s not immediately clear to me what the schema needs to be ahead-of-time, until we see more use cases, we only have a vague sense of which parts of the JSON will end up being useful, is this a good fit for datomic? I tend to think of schemaless databases as the ‘solution’ here, where retrospectively you can decide on what your indexes will be, speed up particular queries , as you acquire more requirements/use cases on the sorts of queries that matter.
If not datomic, what would you suggest, also needs to support graph query operations.
If datomic, the strategy is to break apart these deeply nested JSON structures at ingest time and try to map them to datomic primitives, e.g. json arrays into db.cardinality/many - or how would you go about that?#2022-04-3015:37kenny“Schemaless” just pushes the schema requirement to the read side.
Dynamically generated schema is a rocky road. You need to really trust the data set to do such a thing. It sounds like you would be better off with a different database. #2022-05-0211:22Linus EricssonPostgres and other databases has better support for plain JSON-documents and other things. Still Datomic could link these documents and you could batch read and upsert your datomic schema at your own pace.#2022-05-0211:23Linus EricssonIt is certainly possible to store serialized datastructures as byte arrays in Datomic, but they are no more than just byte arrays. Also, Datomic sometimes store data multiple times.#2022-05-0216:07uwoYou might consider
https://github.com/quoll/asami#2022-04-3021:33vlad_pohhow do i pass multiple rules in a single query?
(d/q '[:find ?t (count ?t2) (count ?t3)
       :in $ % ?p1 ?p2
       :where
       (played2 ?p1 ?p2 ?t)
       (wins ?p1 ?p2 ?t2)
       (wins ?p2 ?p1 ?t3)
       [(= ?t ?t2 ?t3)]]
     atp wins played2 "Roger Federer" "Novak Djokovic")
expected: [$ % ?p1 ?p2], got: 5#2022-04-3022:03favilaI don’t understand your expected/got. But the rule argument is just a vector of rules, so just add more rules to the vector to pass more rules#2022-04-3022:04favila(into wins played2) probably
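The suggestion above can be sketched concretely: the rules input (`%`) is a single vector of rules, so merge the two rule sets before passing them in. `wins` and `played2` are assumed to be the rule vectors from the question:

```clojure
;; Hedged sketch: concatenate the two rule vectors into one rules input.
(def all-rules (into wins played2))

;; then pass the merged vector where % is expected:
;; (d/q '[:find ... :in $ % ?p1 ?p2 :where ...]
;;      atp all-rules "Roger Federer" "Novak Djokovic")
```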
#2022-05-0114:51vlad_pohtrying datomic free for the first time and running into a problem i don't understand
why would the following fail
(def played
  '[[(played ?p1 ?p2 ?t)
     [?e :winner_name ?p1]
     [?e :loser_name ?p2]
     [?e :tourney_name ?t]]
    [(played ?p2 ?p1 ?t)
     [?e :winner_name ?p1]
     [?e :loser_name ?p2]
     [?e :tourney_name ?t]]])
with the following error
{:type java.lang.Exception,
 :message
 "processing rule: (q__355 ?t), message: processing clause: (played ?c__344 ?c__345 ?t), message: processing rule: [played ?p1 ?p2 ?t], message: processing clause: [?e :winner_name ?p1], message: :db.error/not-an-entity Unable to resolve entity: :winner_name",
 :at [datomic.datalog$eval_rule$fn__6648 invoke "datalog.clj" 1459]}
{:type java.lang.Exception,
 :message
 "processing clause: (played ?c__344 ?c__345 ?t), message: processing rule: [played ?p1 ?p2 ?t], message: processing clause: [?e :winner_name ?p1], message: :db.error/not-an-entity Unable to resolve entity: :winner_name",
 :at
 [datomic.datalog$eval_clause$fn__6622 invoke "datalog.clj" 1405]}
#2022-05-0118:18nottmeyit sounds like :winner_name is not registered as attribute
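As the reply above suggests, the attributes must be transacted as schema before they can be queried. A minimal sketch; the value types are assumptions based on the data in the question:

```clojure
;; Hedged sketch: transact schema for the attributes first.
@(d/transact conn
  [{:db/ident       :winner_name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :loser_name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :tourney_name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])
```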
#2022-05-0208:27cl_jhi anybody knows how to properly use mount state in datomic ion lambda? I got the error class mount.core.DerefableState cannot be cast to class java.lang.String (mount.core.DerefableState is in unnamed module of loader clojure.lang.DynamicClassLoader @5943daef; java.lang.String is in module java.base of loader 'bootstrap') after deployed to aws, but works correctly in local repl.#2022-05-0208:34magnarsSounds like you haven't started your mount states.
#2022-05-0208:37cl_jtried to deploy several times, sometimes it works, sometimes it doesn't. i am not familiar with mount + ion lambda. do we need to manually start the states? i thought requiring the states is enough#2022-05-0208:38magnarsYou have to run mount.core/start#2022-05-0208:40cl_jThanks @U07FCNURX! I am wondering why sometimes it works#2022-05-0208:40magnarsAgreed!#2022-05-0208:59cl_jis it a bad idea to just call (mount.core/start) , or should i pick the required states to start?#2022-05-0209:02magnarsI think it is reasonable to start all the mount states as a default.
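One way to ensure the states discussed above are started exactly once per process, whatever the invocation order; a hedged sketch with hypothetical names, not an official Ion pattern:

```clojure
;; Force mount/start once per JVM process via a delay, so both cold starts
;; and warm invocations see started states before the handler body runs.
(ns my.ions
  (:require [mount.core :as mount]))

(defonce start-states! (delay (mount/start)))

(defn my-lambda-handler [request]
  @start-states! ;; no-op after the first invocation
  ;; ... handler logic ...
  )
```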
#2022-05-0219:43vijaykiranI guess someone should fix the git conflicts properly 🙂
#2022-05-0309:09steveb8nQ: anyone else seeing logs like this from CI cloud deploys?
Downloading: com/cognitect/http-endpoint/1.0.101/http-endpoint-1.0.101.jar from
2022-05-03T09:07:12.529Z 93e41cd3-eb38-4d02-8ac3-ee5adda280b8-48jps WARN [com.amazonaws.util.EC2MetadataUtils:414] - Unable to retrieve the requested metadata (/latest/dynamic/instance-identity/document). Failed to connect to service endpoint:#2022-05-0314:43Daniel JompheLast night, we saw this error in a deploy CI DB migration operation:
Execution error (ExceptionInfo) at datomic.client.impl.cloud/get-s3-auth-path (cloud.clj:179).
Unable to connect to https://<snipped>.
but retrying it this morning, it passed.#2022-05-0314:51colinkahnCross linking this since it's more relevant for this channel - https://clojurians.slack.com/archives/C03S1KBA2/p1651588729373049?thread_ts=1651588729.373049&cid=C03S1KBA2#2022-05-0410:55biscuitpantsis there any documentation about Datomic excise segments? its a metric that we are seeing in Datadog, but i cannot find any documentation about what exactly they are#2022-05-0416:40jdkealyis anyone doing on-prem with kubernetes ?#2022-05-0416:42ghadi@jdkealy nubank is#2022-05-0416:43jdkealyThat's good to know! It would simplify my life SOOO much to have the transactor in my k8s#2022-05-0416:43ghadiare you on AWS?#2022-05-0416:43jdkealyyes#2022-05-0416:43ghadistorage in DDB + transactor in K8S is what we do#2022-05-0416:44ghadiworks great. try to minimize pod disruption though (spot instances, etc.)#2022-05-0416:44jdkealyGreat! So what about the transactor address ?#2022-05-0416:44jdkealyDo you use an IP or a load balancer ?#2022-05-0416:44ghadiI'm not sure.#2022-05-0416:45jdkealyok i'll give it a whirl#2022-05-0420:18jdkealySo i was able to start the transactor in a kubernetes pod. And in my cluster, my services has a service address of datomic . I made an alias in my transactor pod datomic = localhost
But when i go to start the db from clojure it says
starting database connection
CREATE DB
Error communicating with HOST datomic on PORT 4334
#2022-05-0420:20jdkealythis tells me that datomic was able to connect to dynamo, it wrote its location to storage, and the transactor is up and running.#2022-05-0420:21jdkealyMy kubernetes service looks like
resource "kubernetes_service" "datomic" {
  metadata {
    name = "datomic"
  }
  spec {
    selector = {
      App = kubernetes_deployment.react.spec.0.template.0.metadata[0].
    }
    port {
      port        = 4334
      target_port = 4334
    }
    type = "LoadBalancer"
  }
}#2022-05-0420:23jdkealy{:alt-host nil, :peer-version 2, :password "<redacted>", :username "<redacted>", :port 4334, :host "datomic", :version "0.9.6045", :timestamp 1651695721103, :encrypt-channel true}#2022-05-0420:23jdkealyso this also means my peer is able to connect to dynamo#2022-05-0420:25jdkealy[email protected]#2022-05-0420:25jdkealyport 80 hangs#2022-05-0420:26Joe LaneWhich version of java are you using?#2022-05-0420:26jdkealyin my peer ?#2022-05-0420:26Joe Lane#2022-05-0420:26jdkealy[email protected]#2022-05-0420:26jdkealy^ peer#2022-05-0420:27jdkealybash-4.3# java -version
openjdk version "1.8.0_92-internal"
OpenJDK Runtime Environment (build 1.8.0_92-internal-alpine-r1-b14)
OpenJDK 64-Bit Server VM (build 25.92-b14, mixed mode)
bash-4.3#
#2022-05-0420:27jdkealytransactor#2022-05-0420:29jdkealywhat versions should i be using ?#2022-05-0420:29Joe LaneThe version of datomic you're using was released in https://docs.datomic.com/on-prem/changes.html#0.9.6045, before JDK17 was ever released.
The latest release, https://docs.datomic.com/on-prem/changes.html#1.0.6397 , added support for JDK17#2022-05-0420:29jdkealyoh ok#2022-05-0420:30Joe LaneHope that helps @jdkealy!#2022-05-0420:30jdkealywhich version of java should i be using ?#2022-05-0420:30Joe Lane17 works great w/ the latest release, otherwise, pick 11#2022-05-0420:30jdkealyok!#2022-05-0420:31Joe LaneLatest release also still supports 8#2022-05-0420:32Joe LaneBut you should really pick a newer jdk than 8, so much has improved.#2022-05-0420:32jdkealyok#2022-05-0420:37jdkealygoing to rebuild the image and give it a go!#2022-05-0420:37jdkealyi'm surprised there's not more docs on using a transactor in k8s#2022-05-0420:51jdkealy[email protected]#2022-05-0420:52jdkealynow the transactor fails with
Terminating process - Serve failed
ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ119007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:787)
at datomic.artemis_client$create_session_factory.invokeStatic(artemis_client.clj:114)
at datomic.artemis_client$create_session_factory.invoke(artemis_client.clj:104)
at datomic.update$create_master$fn__11961.invoke(update.clj:732)
at datomic.update$create_master.invokeStatic(update.clj:722)
#2022-05-0420:56jdkealyi think maybe the old java doesn't honor host aliases#2022-05-0421:25jdkealy@U0CJ19XAM do you think there could be an issue using a load balancer to reach port 4334 ?#2022-05-0421:26Joe LaneI'm not a Kubernetes expert, and I would be suspicious of a load balancer#2022-05-0421:28Joe LaneI also would be suspicious of "aliases"#2022-05-0421:30jdkealyok#2022-05-0423:58jdkealyok, i'm using the IP address, exposed 4334,4335,4336, no luck#2022-05-0500:54jdkealygood god... after some debugging, i realized i had been pointing my kubernetes service to the wrong kubernetes pod :man-facepalming:#2022-05-0500:56Joe LaneWhat happened once you pointed it at the right pod?#2022-05-0500:57jdkealyit just connected fine... I'm using a nodeport and using the service address#2022-05-0500:57jdkealyso the host is just "datomic"#2022-05-0500:58Joe Lane🥳#2022-05-0500:58jdkealyfor hours i was getting "can't connect to 4334"#2022-05-0500:59jdkealyso then i fired up the datomic console on 8080 and when i couldn't connect to that either i got suspicious#2022-05-0500:59jdkealyand then i knew something was wrong when i couldn't connect to python -m SimpleHTTPServer#2022-05-0501:00jdkealystared at it for 30 mins and FML. I was pointing at a Deployment that didn't have anything running on any of those ports#2022-05-0501:02Joe LaneI had something similar a few months ago where I lost a whole day (and some hair) because I forgot that core.async/onto-chan!! closes the channel by default and I didn't set close? to false .#2022-05-0501:03Joe LaneI know you've been looking into this for a while now (not just today), I'm glad you got it sorted out.#2022-05-0501:05jdkealyyes thanks! I'll do a writeup on it, since there's nothing on the web it seems.
Since we've got like 5 environments, i'd hate to spin up 5 ec2 servers. What a relief!#2022-05-0514:26zalkyHey all: is there an easy way from the repl to see what version of the datomic peer is running?#2022-05-0515:15Ivar RefsdalIs that an on-prem in-process peer library?
(some->> (io/resource "META-INF/maven/com.datomic/datomic-pro/pom.xml")
         (slurp)
         (str/split-lines)
         (filter #(str/includes? % "<version>"))
         (first))
=> "    <version>1.0.6397</version>"
yeah, that's pretty ugly...#2022-05-0517:55zalkyBrilliant, worked like a charm, thanks!#2022-05-0518:00zalkyI was just experimenting on some deps aliases and wanted to confirm that the version of datomic I expected to load was in fact the one that was being loaded.#2022-05-0518:06Ivar RefsdalRight :thumbsup:
Sometimes there is also pom.properties available.
If I need something like this I just unzip -p the-jar and grep for pom
https://github.com/metosin/jsonista/issues/22#2022-05-0518:07Ivar Refsdal(and then I know what file to look for)#2022-05-0515:47Ivar RefsdalFor on-prem version 1.0.6397 (latest) it seems that db/cas does not resolve tempids that are strings, only explicit (datomic.api/tempid ..).
Is this a bug? Could it be solved? Should I report it to support?
Reproduced in https://gist.github.com/ivarref/98537d7393d4141fb4dfb2a213756404.#2022-05-0516:07favilaThe root cause I think is that d/entid works for record tempids (resolves to a negative number, which is the lowest-level representation), but not for string tempids. It’s not clear how it could because strings don’t have enough info in them.#2022-05-0517:45Ivar RefsdalThanks, but I didn't quite understand your answer..
It works (as expected) to transact:
[[:db/add "ent" :e/version 1]
{:db/id "ent" :e/id "a" :e/info "a"}]
But
[[:db/cas "ent" :e/version nil 1]
{:db/id "ent" :e/id "a" :e/info "a"}]
fails. I fail to see why :db/cas shouldn't work, but :db/add works.#2022-05-0517:48Ivar RefsdalHere is another case that fails, but shouldn't:
(deftest nil-test
  (let [tempid (d/tempid :db.part/user)
        {:keys [db-after]} @(d/transact *conn* [[:db/cas tempid :e/version nil 1]
                                                {:db/id tempid :e/id "a" :e/info "1"}])]
    (is (= #:e{:id "a" :version 1 :info "1"} (d/pull db-after [:e/id :e/version :e/info] [:e/id "a"])))
    (let [tempid (d/tempid :db.part/user)
          ;; The following transaction succeeds, though it shouldn't:
          {:keys [db-after]} @(d/transact *conn* [[:db/cas tempid :e/version nil 2]
                                                  {:db/id tempid :e/id "a" :e/info "2"}])]
      (is (= #:e{:id "a" :version 2 :info "2"} (d/pull db-after [:e/id :e/version :e/info] [:e/id "a"]))))))
For me it appears that the tempid resolving for :db/cas is broken#2022-05-0517:49Ivar Refsdalactually, it doesn't fails, it just fails to assert that :e/version does not exist.#2022-05-0517:55favila:db/add doesn’t have to read anything; :db/cas has to read#2022-05-0517:56favilaSo when it gets a tempid, it does not know yet what it resolves to, so it can’t make a read#2022-05-0517:57Ivar RefsdalShouldn't it complain / throw an exception if it is an invalid state/operation?#2022-05-0517:59favilaI think cas should reject any tempid#2022-05-0517:59favilabut, the reason it doesn’t for tempids is because d/entid resolves to a long, which works for entity lookup (it just won’t find any data).#2022-05-0518:00favilaTry it: (d/entity db the-tempid)#2022-05-0518:00favilaor (d/entid db the-tempid)#2022-05-0518:00faviladoesn’t work for strings#2022-05-0518:03Ivar RefsdalHm, well cas does accept tempids#2022-05-0518:03Ivar RefsdalHow would one assert that an upsertable entity does not have an attribute set?#2022-05-0518:20Ivar RefsdalAs far as I understand it:
db/cas accepts d/tempid as entity and nil as old value.
This will cause it to write the new value, regardless of whether the attribute was already set.
For any non-nil old value db/cas with d/tempid as entity will throw an exception.
This is a bug, no?#2022-05-0518:31Ivar RefsdalI added the following test to the gist linked above:
(deftest this-should-throw-but-does-not
  @(d/transact *conn* [{:e/id "a" :e/version 1}])
  (let [tempid (d/tempid :db.part/user)
        {:keys [db-after]} @(d/transact *conn* [[:db/cas tempid :e/version nil 2]
                                                {:db/id tempid :e/id "a" :e/info "a"}])]
    ;; :e/version should not be 2, but it is:
    (is (= 2 (:e/version (d/pull db-after [:e/version] [:e/id "a"]))))))#2022-05-0518:32Ivar RefsdalI am off for the evening.
Thanks for your replies and time Favila 🙂#2022-05-0518:33favila> How would one assert that an upsertable entity does not have an attribute set?
You need to parameterize the cas entity by the lookup ref for the upsert. So you can’t use stock :db/cas because it fails if it can’t resolve a lookup ref
#2022-05-0518:33favilaso you need some cas-like thing that knows both the tempid and what it should resolve to#2022-05-0518:34Ivar RefsdalHm, thanks, that makes sense. I will have a look at it tomorrow#2022-05-0608:40Ivar RefsdalAs of now I have some code like this:
(defn cas-inner [db e-or-lookup-ref a old-val new-val]
  (cond
    (string? e-or-lookup-ref)
    (d/cancel {:cognitect.anomalies/category :cognitect.anomalies/incorrect
               :cognitect.anomalies/message "Entity cannot be string"})

    (instance? DbId e-or-lookup-ref)
    (d/cancel {:cognitect.anomalies/category :cognitect.anomalies/incorrect
               :cognitect.anomalies/message "Entity cannot be tempid/datomic.db.DbId"})

    (and (vector? e-or-lookup-ref)
         (= 4 (count e-or-lookup-ref))
         (keyword? (first e-or-lookup-ref))
         (= :as (nth e-or-lookup-ref 2))
         (string? (last e-or-lookup-ref))
         (is-identity? db (first e-or-lookup-ref)))
    (let [e (vec (take 2 e-or-lookup-ref))]
      (cond
        (some? (:db/id (d/pull db [:db/id] e)))
        [[:db/cas e a old-val new-val]]

        (nil? old-val)
        [[:db/add (last e-or-lookup-ref) a new-val]]

        :else
        (d/cancel {:cognitect.anomalies/category :cognitect.anomalies/incorrect
                   :cognitect.anomalies/message "Old-val must be nil for new entities"})))

    :else
    (d/cancel {:cognitect.anomalies/category :cognitect.anomalies/incorrect
               :cognitect.anomalies/message "Unhandled state"})))
which means you can write transactions like this:
[[:ndt/cas [:e/id "a" :as "tempid"] :e/version nil 1]
 {:db/id "tempid" :e/id "a" :e/info "1"}]
Seems to work well enough.
I'm also adding support for resolving pure strings (to e.g. [:e/id "a" :as "tempid"]) before running that function in the transactor...#2022-05-0608:43Ivar RefsdalGiven that cas-inner is executed by the transactor, all functions, both :ndt/cas and :db/cas, will operate on exactly the same database, right?
Thanks for your reply 🙂#2022-05-2219:43Ivar RefsdalHi again @U09R86PA4
I ended up (I think!) getting done what I wanted.
I wrote a library that "handles" duplicates/abort duplicate transactions:
https://github.com/ivarref/double-trouble
It's basically a cas function taking a checksum/sha.
It also supports tempid strings for its version of cas.
If you have any input, I would appreciate it.
Here is the code that runs on the transactor:
https://github.com/ivarref/double-trouble/blob/main/src/com/github/ivarref/double_trouble/cas.clj
Thanks and kind regards.#2022-05-0602:25jdkealyContinuing with my kubernetes rant.
If i had 2 pods on the same service, that sounds like it would cause problems, as traffic would be getting split to 2 transactors and then you can't ensure atomic transactions.
To get the H/A effect, would it make sense to have 1 deployment with 1 pod with a service uri of "datomic" and another deployment with 1 pod called "datomic-failover", each pod starts a transactor with host: "datomic", alt-host "datomic-failover"... would that in effect be what the H/A is doing ?
If the first connected transactor fails to write its heartbeat - the peers connect to the second connected transactor that succeeds with its heartbeat etc. There's probably more to it, but that's the main idea.
In practice you can just make sure to have two transactors running and they will sort out the failover stuff internally.
#2022-05-0606:29Linus EricssonThis process should not be loadbalanced - the transactors and peers use other mechanisms to figure out which transactor is active etc.#2022-05-0619:52Jem McElwainthe service is irrelevant for the transactor, since the pods will read the address directly from storage in order to speak to it. you just have to make sure that it's routable from inside the cluster. so there are no loadbalancing considerations as far as k8s is concerned.#2022-05-0612:07nottmeyHey there, building my first Clojure library, and it aims to combine Datomic and Lacinia! 😳
Still in draft and all, but happy to read your thoughts! (And also keen to get coding feedback 🙊)
https://github.com/nottmey/datomic-lacinia
#2022-05-0619:46Jem McElwainhi, just cut our peers over to use valcache, and we're seeing some "errors" every once in a while in our logs under the key :valcache/put-exception
java.nio.file.NoSuchFileException: /opt/valcache/a73/61ef3bea-4f63-4d27-92fa-cda51dcf0a73
these are logged at info, so i'm assuming they are not significantly impacting our availability, but i'd like to understand a bit better what's going on.#2022-05-0619:47Jem McElwainright now we are provisioning fresh disks every time the application starts, so it's always a cold start. we have plans to snapshot/reuse disks, and i'm wondering if that would help#2022-05-0620:27favilaAre you using a version >= 1.0.6202?#2022-05-0620:29Jem McElwainyup, looks like we're currently on 1.0.6316#2022-05-0620:29favilaAnother possibility is the filesystem itself. What are you using? Are you mounting with strictatime and lazytime ?#2022-05-0620:29favilaI wouldn’t expect to see these, and I don’t see them on our valcache systems#2022-05-0620:30favilawe use xfs, but I don’t think the docs specify#2022-05-0620:32Jem McElwainusing ext4, great questions on the mount flags, let me double check to make sure#2022-05-0620:33favilaalso do you see these before the valcache fills?#2022-05-0620:33favilaor only when it’s full?#2022-05-0620:33Jem McElwainno, these are while it's filling#2022-05-0620:33Jem McElwainconfirmed the disk still has plenty of space#2022-05-0620:34favilaI mean has disk utilization reached datomic.valcacheMaxGb yet?#2022-05-0620:34favila(which can be less than filesystem size)#2022-05-0620:36Jem McElwainyup we provision based on the valcacheMaxGb setting#2022-05-0620:37Jem McElwainyeah just checked and we're missing the flags, that seems like a likely culprit#2022-05-0620:37Jem McElwainokay thanks for your help i'll try to ensure those get set#2022-05-0918:53Jem McElwainunfortunately that didn't seem to solve it! will have to dig deeper...#2022-05-0923:04favilaI would open a support ticket#2022-05-0716:58BenjaminI'm stuck on https://max-datom.com/ (thread)#2022-05-0716:59Benjaminlevel 2
=== Incorrect Query Response ===
[[["Miguel" "Dvd Rom"]]
[["J. R." "Token"]]
[["E. L." "Mainframe"]]
[["Perry" "Farrell"]]
[["Charles" "Diskens"]]
[["Miles" "Dyson"]]
[["Napoleon" "Desktop"]]
[["Segfault" "Larsson"]]
[["Kim" "K"]]]
=== Expected Query Response ===
[[["Miguel" "Dvd Rom"]]
[["J. R." "Token"]]
[["E. L." "Mainframe"]]
[["Charles" "Diskens"]]
[["Perry" "Farrell"]]
[["Miles" "Dyson"]]
[["Napoleon" "Desktop"]]
[["Segfault" "Larsson"]]
[["Kim" "K"]]]
my answer:
(ns level-2
  (:require
   [datomic.client.api :as d]
   [max-datom.connections :refer [db]]))

(d/q '[:find ?n
       :where [_ :book/author ?v]
       [?v :author/first+last-name ?n]] db)
#2022-05-0717:00Benjaminah now I see the ordering is different#2022-05-0717:36jarrodctaylorThe query you are running isn’t the one expected (although it does produce very similar results 🙂) You are taking the value of :book/author ?v and using it as the entity to unify :author/first+last-name on which is probably not what you want for the answer. All you need here is:
(d/q '[:find ?v
       :where [_ :author/first+last-name ?v]] db)
#2022-05-0908:37Benjaminah I see thanks#2022-05-0909:25lambdamHello,
I bumped into a behaviour of Datomic composite tuples that might be problematic for my domain modeling.
I have two ways of identifying an entity as unique: one with two attributes, one with three.
The first attribute is shared among the two.
[;; Shared ident between unique composite tuples
 {:db/ident :foo/name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}

 ;; First kind of composite tuple
 {:db/ident :foo/id
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident :foo/name+id
  :db/valueType :db.type/tuple
  :db/tupleAttrs [:foo/name :foo/id]
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}

 ;; Second kind of composite tuple
 {:db/ident :foo/domain
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident :foo/code
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident :foo/name+domain+code
  :db/valueType :db.type/tuple
  :db/tupleAttrs [:foo/name :foo/domain :foo/code]
  :db/cardinality :db.cardinality/one
  :db/unique :db.unique/identity}]
When I create some entities with the first kind (two attributes), I get the following error at transaction time:
...
:db.error/datoms-conflict Two datoms in the same transaction conflict ...
...
:d1
[17592186186287
:foo/name+domain+code
["plop" nil nil]
13194139674906
true],
:d2
[17592186187212
:foo/name+domain+code
["plop" nil nil]
13194139674906
true],
...
If I understand Datomic's behaviour correctly, as soon as one key of the composite tuple exists, the other ones are automatically considered as existing even if they are not present on the entity (and take the nil value)?!
I thought that the composite tuple would exist if and only if all the keys are present on the entity, which doesn't seem to be the case.
A workaround would be to never share keys between composite tuples. Would it be the best one?
Thanks a lot.#2022-05-0921:18onetomi was just getting tripped up on a similar situation and realized that the composite key is implicitly asserted, when i was not expecting it. it made sense though, after the realization 🙂
just to confirm your example, you are saying:
EX1: when u transact
{:foo/name "NAME" :foo/id "ID"}
you expect it to actually transact
{:foo/name "NAME" :foo/id "ID"
:foo/name+id ["NAME" "ID"]}
but what happens instead is that you get
{:foo/name "NAME" :foo/id "ID"
:foo/name+id ["NAME" "ID"]
:foo/name+domain+code ["NAME" nil nil]}
EX2: while when transacting
{:foo/name "NAME" :foo/domain "DOMAIN" :foo/code "CODE"}
it should mean
{:foo/name "NAME" :foo/domain "DOMAIN" :foo/code "CODE"
:foo/name+domain+code ["NAME" "DOMAIN" "CODE"]}
but what happens instead is that you get
{:foo/name "NAME" :foo/domain "DOMAIN" :foo/code "CODE"
:foo/name+id ["NAME" nil]
:foo/name+domain+code ["NAME" "DOMAIN" "CODE"]}
did i understand it correctly?#2022-05-0921:20onetomi have the feeling that since nil is an allowed value in tuples - and why wouldn't it be? - if any attribute that participates in a tuple is asserted, then the corresponding tuple attribute values are implied.#2022-05-0921:24onetomi have a strong feeling though, that the way you modelled the data in question is somehow incorrect, incomplete, oversimplified, or something like that.
maybe there is another entity lurking in the data model.
if u acknowledge its existence by representing it as its own entity and use a :db.type/ref attribute to connect it to the current entity in question, then this problem would go away.
maybe u haven't done so because, in the real-world domain, some of these entities don't have an established name?
maybe because they don't map to some useful, real-world concept?#2022-05-1015:03lambdamThank you for your answers.
For EX1 and EX2, yes that is exactly what I meant.
The strange thing is that nil isn't allowed as a value for regular attributes (if I understood correctly). I don't get why it would be valid in a tuple. The annoying thing is that the composite tuple is also declared :db.unique/identity. So nil values and partially filled tuples will of course conflict with each other.
My initial understanding of the behaviour of composite tuples was that a tuple would exist iff all its keys exist, but indeed it seems to be asserted as soon as any key exists.
I use it to model the uniqueness of a remote system whose information is injected into our system. The :foo/name and :foo/domain are here for namespacing.
Some values need 2 segments to assert uniqueness, others need 3 with two segments of namespacing.
For the time being, I circumvented the problem by using different idents for the two cases (2 segments and 3 segments). But semantically, the first one is the same.#2022-05-1100:11onetomyou can provide various constraints on entities, if you want to avoid the situations with nils in tuple attrs, using :db/ensure:
https://docs.datomic.com/cloud/schema/schema-reference.html#attribute-predicates#2022-05-0911:30lambdam---
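A sketch of the :db/ensure route onetom points to above, applied to lambdam's schema (`:foo.spec/name+id` is a hypothetical spec name; untested):

```clojure
;; Entity spec (hypothetical name): require both components of the
;; composite tuple to be present together on the entity.
[{:db/ident        :foo.spec/name+id
  :db.entity/attrs [:foo/name :foo/id]}]

;; Transactions then opt in to the check with :db/ensure:
[{:foo/name  "plop"
  :foo/id    "id-1"
  :db/ensure :foo.spec/name+id}]
```

Note that :db/ensure is checked at transaction time and is not stored on the entity itself.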
Also, are Datomic strings interned? Or are they repeated in the DB every time they occur?
And is there a performance difference between a keyword index and a string index?
Thanks#2022-05-0911:50Linus EricssonI am familiar with on-prem (not Datomic Cloud) so I will describe on-prem.
Keywords in the transactor and peer use standard Clojure mechanisms for keywords. I would recommend against generating keywords for indexing entities in the application - they are mostly handles for humans. (Enums and attribute names are suitable for keywords!)
When strings are written to storage, they are compressed, and probably sometimes deduplicated (by the serialization formats transit or fressian) in the blocks of datoms written (blocks are up to about 65 kb in size).
I don't know exactly where/when strings are being interned in the peer or transactor.
When you are using a string as an index, the index is realized in the transactor and in the peer. The strings need not necessarily be fully realized there (they could use tries or similar data structures) but the string content is somehow loaded into memory - either in the object cache or as more regular data structures on the heap (modulo Java's and the CPU's various string optimizations).
Use strings as identity ids. Don't use generated keywords for user objects.
I would not worry about the memory usage of the indexes for ”normal loads”, whatever that means (a Java char in an array or similar takes 2 bytes of RAM as UTF-16).
If you have very special requirements for indexes, Datomic is very well suited to have indexes kept in memory on each peer, driven by the transaction log. The built-in fulltext index is such a process.
#2022-05-1015:06lambdamThanks a lot for your precise answer.#2022-05-0919:22ennWhen using a collection binding (`[?foo ...]`) in a query input, what happens if the value passed for that input is an empty collection? Is the query executed at all? It seems like it is not.
If that’s so, what’s the preferred way to express a query that needs to take a collection, but also needs to be able to handle empty collections?#2022-05-0921:02onetomQ1. what do you mean by the "query not being executed"?
Q2. doesn't an empty input collection mean there is nothing to query?
you might want to use something like (or [?e :attr ?foo] [(ground ::not-found) ?e])
don't know whether (ground nil) is allowed; might be...#2022-05-0921:43ennmight be easier with an example…#2022-05-0921:44ennHere’s a simplified version of my query:
(def my-query
  '[:find ?got-here .
    :in [?test-input ...]
    :where
    (or-join [?test-input ?got-here]
             (and [(ground :match) ?test-input]
                  [(ground 1) ?got-here])
             [(ground 2) ?got-here])])#2022-05-0921:45enn(d/q my-query [:match]) returns 1, as expected#2022-05-0921:45enn(d/q my-query [:doesnt-match]) returns 2, as expected#2022-05-0921:47ennwithout having thought about it very much, I was expecting (d/q my-query []) to also return 2, that is, I expected the second clause of the or-join to match. Instead it returns nil. I guess because the [?test-input …] construct acts like an implicit or across all values in the input, and in this case there are none#2022-05-0921:06onetomWhen I use a non-existent :db/ident in a pull, eg: (d/pull db-val ['*] :NON-existent), I get #:db{:id nil}, which is quite convenient.
BUT, when I use a non-existent :db/ident in a query, I get an exception:
> Execution error (IllegalArgumentException) at datomic.core.datalog/resolve-id (datalog.clj:330).
> Cannot resolve key: :NON-existent
example query:
(-> '{:find  [?referencing-entity]
      :in    [$ ?referenced-entity]
      :where [[?referencing-entity :ref/attr ?referenced-entity]]}
    (d/q (:val @dc) :NON-existent))
Is there some idiom for getting empty results in such a case?
In the production application, the ?referenced-entity is never a :db/ident, but in tests, it's practically always a :db/ident...#2022-05-1001:23johanatanis there any way to transform a result from :find (within the query itself)? for example, to create a query that returns true if [?somevar ...] has one or more elements in it and false if it has zero elements / is empty#2022-05-1001:23johanatanor alternatively if ?a . is either nil or not nil#2022-05-1005:37thumbnailAn attribute can't be nil (only be omitted); but you can use an or/or-join to do this.
(or
 (and [?e :attr _] [(ground true) ?mybool])
 (and [(missing? $ ?e :attr)] [(ground false) ?mybool]))
Here ?mybool is bound to either true or false depending on whether the preceding clause matches
(Alternatively this can be expressed as a datomic rule too)#2022-05-1005:41johanatanThank you. I was wondering if it might have been possible within the :where bindings #2022-05-1017:44johanatandoes that need to be or-join or will or suffice?#2022-05-1100:16onetomboth or clauses refer to the same set of variables (`?e` and ?mybool), so i don't think u need or-join.
> An or-join clause is similar to an or clause, but it allows you to specify which variables should unify with the surrounding clause; only this list of variables needs binding before the clause can run. The variables specify which variables should unify.
— https://docs.datomic.com/cloud/query/query-data-reference.html#or-join#2022-05-1001:23johanatan(as a boolean)#2022-05-1112:47lispers-anonymousDoes the datomic cloud client api cache connections? We have a poorly implemented caching mechanism for our datomic connections and I'm wondering if it's even necessary for us to hold onto these objects.#2022-05-1112:49Joe LaneWhat problem do you think caching the connections is solving?#2022-05-1112:51lispers-anonymousWe have a multi tenant architecture. In production, a single server can connect to one of 350+ datomic databases when fulfilling an http request. I assume the people who implemented the connection cache wanted to make sure that getting a connection was as fast as possible#2022-05-1112:56lispers-anonymousI'm making guesses though. The people who implemented the connection cache (a la memoize) are no longer with the company, commit history has nothing, chat history was lost in an acquisition.#2022-05-1112:58Joe LaneOnce a connection is acquired, it can be used indefinitely without perf penalty. Connections are also thread safe.#2022-05-1113:01lispers-anonymousSo they are safe to cache, I figured that much since we've been doing it for years without much issue. I'm guessing the datomic client api doesn't cache them then, and if we want them cached we should continue managing that ourselves.#2022-05-1113:03Joe LaneThat's correct, the client does not cache them.#2022-05-1113:05lispers-anonymousRight on, thank you Joe. I'll be changing the code to use something like core.cache instead of memoize to manage these things.#2022-05-1113:11Joe LaneAn atom may be sufficient as well.
FWIW, although it may be quick to acquire a new connection, that does not mean the compute-group the connection object routes the request to has the DB spun up, and there may be a small delay while that DB is loaded.
If you wish to avoid this, you can create query-groups specific to a DB or group of DBs to ensure they always remain loaded (e.g. by querying that DB occasionally). The tradeoff here is that all DBs in that compute-node compete for resources.#2022-05-1113:30lispers-anonymousYeah, we think we have observed this behavior (delay from the query group loading the DB). Right now all our databases use the same query group. We're also working on changes that will allow us to spread our database connections across a number of query groups instead funneling them into one. There is a good bit of work we have to do to make that happen but it is underway.#2022-06-1600:51Jake Shelby@UDVJE9RE3 the last point under connections here indicate that connections are indeed cached and creating them is inexpensive https://docs.datomic.com/cloud/client/client-api.html#connection#2022-06-1718:47lispers-anonymousYeah, that documentation contradicts the behavior I've seen though, and what was said in this thread. The connection objects returned are for sure not cached. The instances are not identical across calls to db/connect.#2022-06-1718:49lispers-anonymousIt also doesn't feel inexpensive. There are network calls being made. I see a significant delay when I call d/connect. Sometimes as much as 1 full second.#2022-05-1115:51neilprosserI'm going to ask this as a separate question but it relates to the question above about connection caching. Hopefully this all makes sense. We cache the connections in our system but I've found today that when I used d/sync on my cached connection using a dodgy value for t (in my case I mistakenly put a tx in there, which is a bigger int) datomic.client.impl.shared/advance-t* has changed the t value of the connection state but the stale connection checking which is done in datomic.client.impl.shared/recent-db will never correct that state because the dodgy value is always greater than the status received from the remote call. 
I get Database does not yet have t={...} and it will keep giving me that error until I flush that connection from the cache.
So, my question is, is there a way that connection can be brought back to life or is this just something we need to be very careful about when caching connections?#2022-05-1116:16Joe Lane@neilprosser Passing a future t to d/sync is UB, be sure to avoid doing that.#2022-05-1116:19ghadi@neilprosser be sure not to pass t's from untrusted clients, like browsers
#2022-05-1116:20ghadione simple technique I've used when interacting with untrusted clients, is to make an authenticated 'cookie'
server knows some secret...
cookie = HMAC(some_secret, t)#2022-05-1116:21Joe LaneOne might even call it a zookie 🙂#2022-05-1116:24neilprosserThanks @lanejo01 and @ghadi. I figured it might just have to be a 'be more careful next time and don't do that'. I hadn't realised that it was a big problem until I broke it today.#2022-05-1116:41favilaIs there an officially-supported way to get a T from a TX in the client api?
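ghadi's HMAC 'cookie' can be sketched like so (hypothetical helper names; plain JDK crypto, nothing Datomic-specific):

```clojure
(import '(javax.crypto Mac)
        '(javax.crypto.spec SecretKeySpec))

(defn t-cookie
  "Hex-encoded HMAC-SHA256 of a basis-t under a server-side secret."
  [^bytes secret t]
  (let [mac (doto (Mac/getInstance "HmacSHA256")
              (.init (SecretKeySpec. secret "HmacSHA256")))]
    (apply str (map #(format "%02x" %)
                    (.doFinal mac (.getBytes (str t) "UTF-8"))))))

(defn valid-t?
  "Accept a client-supplied t only when its cookie matches."
  [secret t cookie]
  (= cookie (t-cookie secret t)))
```

The server hands out (t-cookie secret t) alongside t, and later refuses any client-supplied t whose cookie does not verify.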
#2022-05-1116:43favilaI heard a scary thing that cloud’s entity id structure isn’t guaranteed, so masking out the partition bits like on-prem d/tx->t doesn’t work. So what’s the alternative?#2022-05-1214:44lispers-anonymous(bit-and eid 0x3ffffffffff)
#2022-05-1214:45lispers-anonymousThat is what we use in cloud. Cognitect told us about it#2022-05-1214:45favilaso cloud’s entity id structure is still the same?#2022-05-1214:45favilaor at least the-same enough?#2022-05-1214:46lispers-anonymousIt has not failed us so far, but we do not use this in production code. Only when debugging or auditing our databases#2022-05-1117:19nottmey@ghadi Oh, why is that? I wanted to build a consistency mechanism in my application by returning t to my client and then using it for the following queries which need to be in a consistent state with the data the client has send before.
(I assumed t is exactly meant for that purpose 😅)#2022-05-1117:25favilaThe “why” is that this is a DoS vector.#2022-05-1117:25favilathey could supply a T far in the future, or discover and exploit some bug in datomic with invalid values#2022-05-1117:27favilaWe do this too (on-prem api though), but 1) we parse the T. 2) we limit its range 3) we check that it isn’t too far “ahead” of the current T 4) we put a timeout on the deref to wait for the sync to complete.
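favila's four checks can be sketched with the on-prem peer api (hypothetical function name; the 1000-t window and 5-second timeout are arbitrary choices, not Datomic defaults):

```clojure
(require '[datomic.api :as d])

(defn safe-sync
  "d/sync to a client-supplied basis-t only after validating it."
  [conn t-str]
  (let [t       (Long/parseLong t-str)        ; 1) parse the T
        current (d/basis-t (d/db conn))]
    (when (and (pos? t)                       ; 2) limit its range
               (<= t (+ current 1000)))       ; 3) not too far ahead of current T
      (deref (d/sync conn t) 5000 nil))))     ; 4) timeout on the deref
```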
#2022-05-1118:21nottmeyI see, thank you for pointing that out. I didn’t even think about higher values :man-facepalming:
Ok, I imagine that’s manageable. -> Wanted to go for a low timeout retry mechanism (with exponential backoff) anyway.#2022-05-1118:11zakkorHi guys! I am trying to create a datomic schema for a data structure that looks something like this: (some fields omitted, but not important)
{:url url
 :negotiable? ""
 :agency? false
 :raw {:title "asdasd"
       :description "asdasd"}}
I am having trouble figuring out how I am supposed to represent the nested map.
Assuming my top-level entity is called a "posting", should I have something like...
:posting/url
:posting/negotiable
...
:posting/raw.title
:posting/raw.description
(I don't even know if the dot syntax is a thing, I'm just guessing)
Or perhaps something like making :posting/raw a ref type with isComponent? but then I'm not sure how I'm supposed to define its fields#2022-05-1118:19jcfIf you know the keys and value types you'll need in the nested map, a component entity would make sense. You'd only have to install each attribute and could then transact the nested maps.#2022-05-1118:20zakkor@U06FTAZV3 I think I've got it, something like this?
[(create-attr :posting/url :string)
 (create-attr :posting/raw :ref {:db/isComponent true})
 (create-attr :raw/description :string)
 (create-attr :raw/phone :string)])
(excuse my constructor function)#2022-05-1118:21zakkor(d/transact conn [{:posting/url "http" :posting/raw {:raw/description "desc" :raw/phone "0723"}}])#2022-05-1118:22jcfThat looks about right to my naked eye. 🙂#2022-05-1118:23jcfI've used a convention for nested things that worked quite well in the past. The raw bits (assuming they only show up inside postings) would be called :posting.raw/description etc.#2022-05-1118:23jcfThat can make destructuring things quite tidy, and can line up with your namespaces. #2022-05-1118:24jcfNot a requirement at all. Your names are your business. 🙊 #2022-05-1118:24zakkorYeah, I was going to say that as a newbie, it feels a bit weird to make :raw/something a "global" datom, when it only makes sense in the context of a :posting{:tag :div, :attrs {:class "message-reaction", :title "100"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("💯")} " 1")}
#2022-05-1118:25zakkorcalling it :posting.raw is just a naming change, or will it actually let me operate on the "raw" as a map when transacting/querying?#2022-05-1118:25jcfSounds like you're off to a flying start with the way you're conceptualising this stuff!#2022-05-1118:27jcfIf you prefix things with posting.raw/… Clojure will help with destructuring but Datomic itself will just see these as names. I can't think of a place where any sort of formal hierarchy shows up on the database side of things. They really are just names.#2022-05-1118:29jcfIn Clojure it's nice when you can do things like this:
(let [{:posting.raw/keys [description]} posting]
  (str "Description is " description))#2022-05-1118:30jcfThat's not a great example because you'd probably just access the map directly but with realistic code… 🙂#2022-05-1118:31zakkorI see what you mean 😄#2022-05-1118:50zakkor@U06FTAZV3 thanks a lot for your help!
#2022-05-1218:30mdaveThere is an example in the docs under Functional Expressions which says:
;; this query will not work!!!
[:find ?celsius .
 :in ?fahrenheit
 :where [(/ (- ?fahrenheit 32) 1.8) ?celsius]]
;; use multistep instead
[:find ?celsius .
 :in ?fahrenheit
 :where [(- ?fahrenheit 32) ?f-32]
        [(/ ?f-32 1.8) ?celsius]]
On the other hand, the below example works for me. Just wondering why a single lambda is not considered to be a cleaner approach compared to the multi step calculations.
[:find ?celsius
 :in ?fahrenheit
 :where
[(#(/ (- % 32) 1.8) ?fahrenheit) ?celsius]]#2022-05-1317:07favilaI’m kinda surprised the lambda works in datomic cloud with the client api#2022-05-1317:09favilaconceptually, the query is data, not code, and isn’t meant to have eval run on it. There may also be inefficiencies from creating multiple function objects, but I don’t know where it’s created and whether it’s created more than once.#2022-05-1317:10favilaIOW this feels like a bug and you shouldn’t rely on this behavior; it may even be a security hole waiting to happen.#2022-05-1317:11favila(I am not a Cognitect though)#2022-05-1416:27onetomi've also used the lambda approach in the past, for the same reason, to get around the limitation of recognizing query variables only at the top level of the expression, and not in nested levels.#2022-05-1219:35zakkorAre you supposed to transact the schema every time you open the db connection?#2022-05-1219:38pyryNo, transact only if something needs to change.{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 2")}
#2022-05-1307:49Linus EricssonIf you are using on-prem, conformity is good: https://github.com/avescodes/conformity#2022-05-1416:32onetomi usually have some kind of an ensure-schema function, which speculatively transacts the schema with d/with, and if the :tx-data of its transaction result would have more than 1 datom, then i actually transact it.
it's not the cleanest if you have a cluster of compute instances, but we don't have such complicated needs.
i've also generalized this to an arbitrary sequence of transactions, because we also need to ensure some seed data, not just schema attributes.
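onetom's ensure-schema idea can be sketched against the client api (hypothetical function name; the speculative :tx-data always includes the tx-instant datom, hence the > 1 check):

```clojure
(require '[datomic.client.api :as d])

(defn ensure-schema!
  "Transact schema-tx only when applying it would assert new datoms."
  [conn schema-tx]
  (let [report (d/with (d/with-db conn) {:tx-data schema-tx})]
    (when (> (count (:tx-data report)) 1)
      (d/transact conn {:tx-data schema-tx}))))
```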
#2022-05-1308:27Ivar RefsdalHow do people write database functions (on prem)?
Using quote (`'`) as per the https://docs.datomic.com/on-prem/reference/database-functions.html#cancel doesn't feel very ergonomic.#2022-05-1308:34Ivar RefsdalI remember seeing some gist macro-thing for that a long time ago, but I can't find it anymore#2022-05-1312:58ennIf you use Conformity to apply your schema it’s just EDN so you don’t have to quote the function definitions.
That said, life got a lot easier when we switched to using classpath functions rather than defining them inline in the tx data.
#2022-05-1314:36Ivar RefsdalRight. I've used conformity. I don't think it/EDN makes for more ergonomic function definitions.
Classpath functions: Aha. I didn't try that.
Only problem I see is a slow/somewhat cumbersome development/deploy cycle...
I wrote a small rewrite-clj-thing that extracts a single namespace into a transactor function, maybe I should make that a public library.
It has worked well for - well - single namespace functions so far.#2022-05-1315:13ennyeah, the deploy cycle is more cumbersome, but the development cycle is IME much better. You can develop them as normal Clojure functions, test them with or without a database, etc.#2022-05-1315:13enn(with classpath fns)#2022-05-1316:33Ivar RefsdalRight. Thanks :-)#2022-05-1317:32Ivar Refsdal> You can develop them as normal Clojure functions, test them with or without a database, etc.
This is also what is possible with the rewrite-clj-"strategy", limited as of now to a single namespace.#2022-05-1422:32dazldI really like Valentin’s https://github.com/vvvvalvalval/datofu
#2022-05-1608:19Ivar RefsdalThanks @U3ZUC5M0R, I hadn't heard about datofu. However I don't think/see that it offers better ergonomics for writing database functions though.#2022-05-1608:34dazld@UGJE0MM0W you saw the db-fn helper?
(def foo (db-fn :some/fn
'{:lang "clojure"
:requires ([datomic.api :as d] [clojure.string :as str])
:params [db args]
:code (let [{:keys [foo]} args]
(prn ::hello-fn))}))#2022-05-1609:15Ivar RefsdalYes, I did, and thanks again. I think the easiness of using that style is about the same as the example in the documentation that uses quote#2022-05-1911:04Ivar RefsdalHere is my take on this:
https://github.com/ivarref/gen-fn
Feedback is appreciated 🙂
CC @U3ZUC5M0R @U060QM7AA
#2022-05-1516:38BenjaminCan I mess with query groups by accidentally pushing and deploying compile errors to the main compute group?#2022-05-1815:29AthanGreetings Datomic Clojurians, I have just started my journey in the Clojure ecosystem. I would like to thank in advance everyone in the group for sharing experience and helping each other.
#2022-05-1817:55Daniel JompheYeah, welcome Athan! Glad to exchange in the future too. 🙂#2022-05-1818:06AthanThank you all for the warm welcome ✌️#2022-05-1823:47steveb8nQ: is there a way to make a Lambda call Datomic cloud via http-direct? I suppose this is really a VPCLink question. Has anyone got this working?#2022-05-1920:05jdkealyhow can you find out how many datoms are in your database?#2022-05-1920:09kennyClient api or on-prem?#2022-05-1920:09jdkealyon-prem#2022-05-1920:14ghadihttps://docs.datomic.com/on-prem/clojure/index.html#datomic.api/db-stats#2022-05-1921:47jdkealythank you!#2022-05-2015:41zakkorI'm trying to add a where clause based on some condition (basically, if the min-surface variable is not nil).
This doesn't work, but neither does putting the (when) outside of the [] clause. Any tips?
(d/q '[:find (pull ?e [*])
       :in $ ?min-surface
       :where
       [?e :posting/surface ?surface]
       [(when true (>= ?surface ?min-surface))]]
(d/db conn) 30)#2022-05-2015:51Joe Lane@edward.partenie Check this out https://clojurians.slack.com/archives/C03RZMDSH/p1620240039467800?thread_ts=1620237584.465800&cid=C03RZMDSH
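One common shape for this (a sketch with hypothetical names): build the query as data and only add the extra input and predicate clause when the parameter is present:

```clojure
(require '[datomic.client.api :as d])

(defn find-postings
  "Pull postings; filter on :posting/surface only when min-surface is non-nil."
  [db min-surface]
  (let [query (cond-> '{:find  [(pull ?e [*])]
                        :in    [$]
                        :where [[?e :posting/surface ?surface]]}
                min-surface
                (-> (update :in conj '?min-surface)
                    (update :where conj '[(>= ?surface ?min-surface)])))]
    ;; pass the extra input only when the clause was added
    (apply d/q query db (when min-surface [min-surface]))))
```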
#2022-05-2520:47johanatandoes the pull many api support sorting? i'm not seeing it in the pull pattern grammar but just wanted to double check#2022-05-2521:03thumbnailNo. As far as i know the reason for this is that sorting in the db isn't more efficient (at least when using peer)#2022-05-2521:04johanatanYea it's more of a syntactic cleanliness issue for me. Would rather pack as much “logic” into the db queries as possible #2022-05-2606:33thumbnailWell, your app “is” the database when using datomic. It's doing the query-logic itself etc too.
But i know the feeling 😅 have been searching for ways to support sorting / pagination in the past too#2022-05-2521:44jdkealyany way to enforce a character limit on a text field in datomic?#2022-05-2521:45favilaAttribute predicate #2022-05-2521:50jdkealyis it possible to add a predicate to an attribute after it's been installed#2022-05-2522:23jdkealyi see it is yeah#2022-05-2522:29jdkealyhow does datomic find the classes for things like attribute predicates? I installed a predicate but it says it can't find my namespace.#2022-05-2522:37jdkealyAttribute predicates must be on the classpath of a process that is performing a transaction.
Does that mean it needs to be in the transactor?#2022-05-2600:35favilaYes#2022-05-2600:36favilahttps://docs.datomic.com/on-prem/reference/database-functions.html#classpath-functions#2022-05-2600:36favilaYou set an env var#2022-05-2600:38jdkealyi was looking for a quick way to stop a user from inputting like 20k characters into my API and then storing it in datomic. It seems that i'll have to rethink deployments and source code going this route. I'll just more tightly control the HTTP API. This is definitely interesting, but was not expecting to ever have to touch the transactor's container after starting it.#2022-05-3002:14Linus EricssonThese checks should most likely be done in the peer-process, not the transactor. Also you can transact DB functions to the transactor if they are just using standard clojure/java.#2022-05-2522:52jdkealyi can't seem to make it work on either#2022-05-2522:55jdkealyis it possible to use an installed function as a predicate?#2022-05-2608:10tomekwHi, I've a quick transact vs transact-async question. Docs state:
Returns a completed future that can be used to monitor the completion
of the transaction. If the transaction commits, the future's value is
a map containing the following keys:
What does completed mean here? Is it a realized? future or CompletableFuture?#2022-05-2612:03favilaD/transact blocks waiting for the future to complete, subject to a global, system-property-tunable timeout#2022-05-2612:03favilaD/transact-async never blocks#2022-05-2612:03favilaThat is the only difference#2022-05-2612:03favilaImportantly, both return the same future#2022-05-2612:04tomekw👍 thank you#2022-05-2612:06tomekwso, transact returns a realized? => true future, correct?#2022-05-2612:07favilaOr throws#2022-05-2612:07tomekw👍#2022-05-2617:06jdkealySomeone pen tested my app and made a bunch of test records with like 10k+ character long strings.
I'd like to find all of them and delete them. Is there a way to count the string length and grab all entities? I tried this, doesn't seem to work
(d/q
'[:find ?e
:in $
:where
[?attr :db/valueType :db.type/string ]
[?e ?attr ?val]
[(> 300 '(count ?val))]
] (_d))#2022-05-2617:32favilaYou need to bind the count separately. You may be better off using d/datoms for this#2022-05-2617:32favilaQueries like this use a lot of memory#2022-05-2617:32jdkealycool thanks a bunch!#2022-05-2617:33favilaNote that the large values will still be in your history indexes#2022-05-2617:33jdkealyhow problematic is that?#2022-05-2617:34favilaDepends on your database#2022-05-2617:34jdkealyusing dynamo#2022-05-2617:35favilaHow many of them, how much history reading you do, how much novelty you produce, what your fragmentation is like, etc#2022-05-2618:17jdkealygot it thanks. it's just a test cluster, i won't worry about it for now#2022-05-3015:02dazld(on-prem) I’m seeing messages like this one during an import:
[WARN] [org.apache.activemq.artemis.core.client] - AMQ212054: Destination address=os-xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx.tx-submit is blocked. If the system is configured to block make sure you consume messages on this configuration.
I’m guessing it’s because the transactor is blocking? (using d/transact-async)#2022-05-3018:25ghadiwhen you get responses from transact-async, are you checking them?#2022-05-3018:25ghadisome responses indicate backpressure e.g. stop submitting txes
#2022-06-0115:01dazlddo you know if there is an example of what a message indicating backpressure looks like somewhere? It’s quite a tricky thing to simulate#2022-05-3015:22zalkyHey all, wondering if there's any way to reference the unification set of a variable in a where clause. Something like:
;; more clauses
[?e :attr ?v]
[(count [?v ...]) ?count]
;; more clauses
#2022-05-3015:37favilaSubquery #2022-05-3015:41favila[(q [:find ?v :in $ ?e :where [?e :attr ?v]] $ ?e) ?vs]
[(count ?vs) ?v-count]
[(identity ?vs) [[?v]]]
#2022-05-3015:42favilaOr a function call doing something similar (that may save memory)#2022-05-3015:42favilaAggregates are very much not datalog’s strength#2022-05-3015:42favilaUsually stuff like this is done in the :find or outside the query altogether#2022-05-3015:48zalkyThanks! I had broken it up into two queries, but I was curious if there was a simpler approach.#2022-05-3020:05Patrick BrownHEY ALL! So, I've got this error that is happening in the middle of a long chain of functions. I'm hoping someone can tell me the root cause, because I'm having a hard time seeing why it started happening. Cheers!
Execution error (ExceptionInfo) at datomic.core.error/raise (error.clj:55).
:db.error/not-in-system-partition Value of :db.install/attribute must be in :db.part/db partition, found :person/name#2022-05-3020:25favilaBad transaction data#2022-05-3020:41Patrick BrownYup, I just arrived to tell people not to worry about me. I dug it out. Thanks @U09R86PA4 for coming quick to the rescue. CHEERS!#2022-06-0111:27Ivar RefsdalHi.
For the on-prem metrics callback, it seems that StorageGetMsec/avg or median is not a thing.
Are there any plans to add this?
Seems like an easy feature to add, and helpful in combination with hi, lo, etc. that are already there.#2022-06-0120:54favilahttps://docs.datomic.com/on-prem/operation/monitoring.html#metrics-outside-aws#2022-06-0120:54favilaStorageGetMsec (and all StatisticSet metrics) produce enough info to produce an average
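Concretely: the on-prem callback receives each StatisticSet metric as a map, assumed here to have the documented :lo/:hi/:sum/:count shape, so an average is one division away (a sketch; the my.metrics namespace and the wiring via the datomic.metricsCallback property follow the linked docs):

```clojure
;; Enabled with e.g. -Ddatomic.metricsCallback=my.metrics/callback
(ns my.metrics)

(defn callback [metrics]
  ;; metrics: map of metric name -> {:lo .. :hi .. :sum .. :count ..}
  (when-let [{:keys [sum count]} (:StorageGetMsec metrics)]
    (when (pos? count)
      (println "StorageGetMsec avg ms:" (double (/ sum count))))))
```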
#2022-06-0208:08Ivar RefsdalThanks :thumbsup:#2022-06-0118:17lostineverlandFYI: in the datomic documentation https://docs.datomic.com/cloud/query/query-data-reference.html#implicit-joins the Janis Joplin query is missing from the code block and is oddly appended after the code block.#2022-06-0207:38Kris CAre there any Datomic training courses available, preferably with some kind of certification? So far, I have found only this: https://learndatomic.com/. Looking for recommendations. I am already quite proficient with Datomic on-prem - have started a project, written transaction functions etc but I am looking to further extend my Datomic knowledge...#2022-06-0214:30jarrodctaylorHi Kris,
There are no certifications offered for Datomic.
As far as other learning opportunities you can also check out https://max-datom.com
#2022-06-0308:28Kris CThanks, @U0508JRJC.#2022-06-0320:54manutter51Anybody ever seen issues with the transactor restarting itself every few minutes due to failed heartbeat? We’ve got plenty of RAM, plenty of disk space, including plenty of space for the logs, low network latency to the MSSQL backend, low load, nothing in the logs to indicate why the heartbeat keeps failing?#2022-06-0321:00favilaI would look for JDBC connection issues#2022-06-0321:02manutter51That would cause a heartbeat failure?#2022-06-0321:02manutter51You’re talking JDBC connection from datomic back to the MSSQL back end?#2022-06-0321:03favilaThe “heartbeat” is the transactor writing its address into storage periodically. It’s part of the HA failover system#2022-06-0321:03favilaI think it’s the id “pod-coord” in the sql backend.#2022-06-0321:04favilaI don’t remember exactly. It starts with “pod-”#2022-06-0321:04favilaso the heartbeat failure would mean the transactor couldn’t write to MSSQL#2022-06-0321:05favilaif this doesn’t happen immediately on startup, that suggests some JDBC connection-level thing is wrong. 
maybe a connection count is exceeded, maybe MSSQL killed a long-running connection, maybe there’s TCP issues lower down the stack.#2022-06-0321:05manutter51Ok, I’m seeing references to heartbeat in the logs on New Relic#2022-06-0321:06favilaThis heartbeat entry is also how peers find the transactor#2022-06-0321:07manutter51The problem happens at irregular intervals, but we’re definitely starting successfully and running for a good few minutes.#2022-06-0321:07faviladoes it happen only when there’s an idle transaction period?#2022-06-0321:07favilaalso, are you running on google cloud?#2022-06-0321:08manutter51No it seems to only happen when we try to execute transactions.#2022-06-0321:08manutter51And we’re not on Google Cloud#2022-06-0321:10favilado any transactions succeed?#2022-06-0321:10favilaever?#2022-06-0321:10manutter51Yes, we’re getting some throughput, we’re just having frequent interruptions.#2022-06-0321:11favilahm. yeah, I would look very closely at JDBC driver settings and MSSQL settings#2022-06-0321:11manutter51I’ll pass that on, thanks much!#2022-06-0321:12favilaand try to correlate the interruptions with either idle time or high load (such as an indexing job).#2022-06-0321:12favilaidle time maybe triggers timeouts or connection closes; high load maybe causes instability or forced closes#2022-06-0321:13manutter51:thinking_face:#2022-06-0321:14manutter51Ok, that’s some stuff we can look into, thanks again#2022-06-0707:19Ivar RefsdalCould it be that the MSSQL backend server has a short max time that a connection can "live"?
From what I know about tomcat-jdbcpool, which Datomic uses, the default is to never close a connection when it is returned to the pool.
It's controlled by the https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html#How_to_use property. Unfortunately I don't think you can control this from/in Datomic.#2022-06-0711:23manutter51We ended up restarting the SQL Servers in the HA group (which was a non-trivial operation), and that seems to have resolved the issue, so it does look like a problem with the underlying storage rather than with datomic itself.
#2022-06-0712:52Ivar RefsdalAre you running your own SQL servers "on premises"?
For our MS Azure PostgreSQL setup we needed to increase the number of IOPS or so. The database was struggling a lot before that. I don't recall how that manifested itself exactly, but I think with dead/dropped connections.#2022-06-0712:52manutter51Thanks, I’m not involved on the SQL Server side of things at all, but I’ll pass that on.#2022-06-0713:02Ivar RefsdalIf you are running the database in the cloud or "paas", I'd also recommend setting socketTimeout on the peers:
https://ask.datomic.com/index.php/631/blocking-event-cluster-without-timeout-failover-semantics?show=700#a700
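The socketTimeout suggestion amounts to adding a driver parameter to the JDBC part of the peer's connection URI (a sketch; host, credentials, and the 30-second value are placeholders, and socketTimeout here is the PostgreSQL JDBC parameter, measured in seconds):

```clojure
(def db-uri
  (str "datomic:sql://my-db"
       "?jdbc:postgresql://db-host:5432/datomic"
       "?user=datomic&password=datomic"
       "&socketTimeout=30"))
```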
#2022-06-0322:01jdkealyIs there a way to drop a database without using a peer ? What would be the fastest way to drop a db ?#2022-06-0322:07favilaWhat problem are you trying to solve? Do you have multiple databases in the same storage+transactor, or do you want to deprovision the whole stack as fast as possible, or something else?#2022-06-0322:08jdkealyI have a script that restores a DB, can't restore a DB with an existing DB of same name, so i want a quick one-liner that doesn't require devs (who don't know clojure) to drop and restore#2022-06-0322:09favilawhat kind of storage?#2022-06-0322:09jdkealyjust local dev#2022-06-0322:10favilawould it make sense to unconditionally rm -r the data directory?#2022-06-0322:13favilaor even to not use datomic-level backup/restore but to distribute the h2 files#2022-06-0322:13favilathen it’s just file copy#2022-06-0322:14jdkealyoh really? Even if it's from dynamo ?#2022-06-0322:15favilano, I’m assuming you’re using the same storage#2022-06-0322:18favilastorage level operations are probably always going to be fastest. I don’t know if you care about what else may be in the local dev. 
If it’s just distributing readonly replicas, you could backup from dynamo, restore into h2, then distribute the h2 files to all the devs and just blow away whatever h2 files they have already.#2022-06-0322:19favilaif you do care about what else may be in those local dev databases, then I think you need d/delete-database or d/rename-database, a more interactive datomic-level restore, etc#2022-06-0322:21favilaRemember d/delete-database doesn’t reclaim storage#2022-06-0322:21favilahttps://docs.datomic.com/on-prem/operation/capacity.html#garbage-collection-deleted-dev#2022-06-0322:37jdkealythanks!#2022-06-0322:37jdkealyWas list-databases removed from the datomic peer api ?#2022-06-0322:38favilaare you thinking of get-database-names?#2022-06-0322:38jdkealy#2022-06-0322:39favilathat’s the client api#2022-06-0322:40jdkealyoh gotcha#2022-06-0322:40favilaI don’t know why they decided to make these different#2022-06-0322:40favilapeer api predates the client api#2022-06-0322:41jdkealythanks!#2022-06-0610:15pieterbreedHi all 👋 - I'm looking for guidance on setting up datomic cloud, esp in environments where prod and dev are separated into different AWS accounts. Anybody here have something for me to look at, or wanting to offer advice?#2022-06-0613:30Daniel Jomphe👋:skin-tone-3: Hi Pieter, that's our setup here too. It's very easy because each account is isolated; therefore it's very easy to bring symmetry to your tooling.
When you'll have specific questions, I invite you to post them all separately in the channel (just like you did about the GitHub Action), and some of us will definitely be happy to step in to provide advice.
#2022-06-0613:11pieterbreedAlso - I found this github action in the marketplace for deploying ions (https://github.com/marketplace/actions/datomic-ions-deploy) Is there somebody here that uses it and can vouch for it?#2022-06-0613:35Daniel JompheI don't remember finding this one a few years ago when we started with Datomic Cloud and Github Actions.
They use a custom Docker image that they own.
Here I preferred to follow GH's advice to keep our Actions in their syntax if possible, to benefit from quicker actions (if I remember well). This adds a small learning curve about GH's yaml config, though. If you already configure your developers with Docker, you might profitably go ahead with a Dockerfile in GH Actions. (We do, but with VS Code dev containers.)
What we ended up doing is creating ourselves a wrapper script to handle our deployments from our local machines, and then call that same script from the GH Action, using GH Secrets to authorize those calls.
We needed to do quite an amount of AWS IAM config to authorize GH's machines to do that, though. And we needed to repeat it for each one of our AWS accounts since each one of our Datomic Cloud environments is in a distinct AWS account.#2022-06-0613:47pieterbreedOK, thank you for the feedback. I guess I have one more question related to this: I am used to the idea of "promoting" artifacts between environments. I'm curious how you guys set up your flow. Specifically, if you deploy from github how do you decide which branches/tags go to which environments?#2022-06-0613:47pieterbreed(From what I understand of datomic cloud; the artifact is part of the system, so is stored in the same AWS account as the ions, thereby making "promoting" of artifacts difficult)#2022-06-0613:57Daniel JompheYeah, we don't store artifacts to test, vet and promote.
We start a new build for each environment/account
So one could say we test-vet-promote git commits.
• development branch is auto-deployed to an env
• development branch is nightly-deployed to another env
• main branch is auto-deployed to main env (after merges from development branch)
#2022-06-0613:57pieterbreedawesome, thank you! :thumbsup:
#2022-06-0613:58Daniel JompheIf you want to eliminate builds, you could copy from the S3 code bucket of Datomic in one account to the other one, I suppose, but this is clearly out of supported territory. You'd need to make sure you understand how what you do plays in AWS Code Build territory. But I think it's workable and completely possible to make it practical.#2022-06-0614:00Daniel JompheFrom my perspective, Datomic Cloud's tools do just that, starting from a git repo. Connect them to this or that AWS account, and they perform the copy themselves, if you look at it this way.#2022-06-0614:03pieterbreedYeah - that was my understanding too. Every combination of tools and service providers are every-so-slightly different, and hearing you describe your setup affirmed that 1) I wasn't going off track (much) and 2) the solution I have in mind is at least workable.#2022-06-0614:20Kris CAnyone of you using Datomic in production? Could you please share your experiences? Primarily interested in negative experiences, since I came upon this comment on HN:
I went to a Clojure meetup one time and they all went on about how using Datomic in production is a nightmare and it's generally an over-engineered product that isn't worth the trouble in the end. Do most people who have dealt with Datomic in production feel this way?
We have adopted Datomic (on-prem) for a project and, so far, I really like it a lot, but I want to prepare myself for any future problems...
#2022-06-0615:54favilaThe biggest gotchas I’ve had (on-prem) are all operational; “over-engineered” is definitely not what I would call it. A big strength of datomic is also a big headache: it’s difficult to get rid of stuff (bad schema, too-large values, mis-partitioned data, too much data, etc) as you scale; network consumption and storage consumption and object-cache locality from index churn (or just sheer volume of data) become big problems and cost drivers that can’t be solved easily. Often you have to essentially start over (i.e. decant), which is not a casual operation.#2022-06-0615:57dvingoI've worked with Datomic cloud at two jobs and overall experience was not positive.
Major cons:
• closed source - your hands are tied when trying to solve your own problems
◦ all you can do is reach out for support which in the places I've worked the feedback cycles were very slow (multiple days)
• no introspection to the query engine
◦ any query performance problems are incredibly difficult to analyze because you have no data - you have to guess and check
◦ related to this: clause order matters in the version of datalog used by datomic - thus your query performance may depend (very significantly) on clause order - and because it's not just clause order but number of datoms that match each clause that affect performance, you can see a query that is performant today become slow as the distribution of datoms changes - this means you have to worry about this all the time when writing queries as a sort of low-level background paranoia
• This is not really about the design per-se but perhaps the marketing and messaging:
◦ I've come to believe that the history API is best to be avoided for application features - it is wonderful for operational insights and post-hoc investigations of provenance, but because all you have is transaction time and not a concept of "valid time", you're screwed if you want to, for example, migrate a DB using a tx import (your tx times will be mutated)
Those are the really big ones. The wonderful gift that datomic brought to us (well me at least) was reviving datalog as a query language combined with the attribute model of representing information.
I am completely confused by the closed source nature of datomic, especially when there are lots of examples of a dual open/closed setup (mongo, neo4j, cockroachdb).
At this point though, we have XTDB, datalevin, asami, and datahike, which provide us with lots of (open source and gratis) options to utilize datalog and attribute modeling in our software
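To make the clause-order point above concrete, here is an illustration (the schema names are made up):

```clojure
;; Clauses are evaluated top-down, so the same logical query can differ
;; wildly in cost depending on ordering.

;; Likely slow if many orders are :shipped:
'[:find ?o :in $ ?email
  :where
  [?o :order/status :shipped]
  [?o :order/customer ?c]
  [?c :customer/email ?email]]

;; Usually faster: start from the (selective) email and navigate out:
'[:find ?o :in $ ?email
  :where
  [?c :customer/email ?email]
  [?o :order/customer ?c]
  [?o :order/status :shipped]]
```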
#2022-06-0616:06favilaThings I wish someone told me years ago (on-prem-specific to some degree):
• do not ever put any largish string into datomic (4k is the largest I would ever contemplate, preferably much shorter), and especially do not fulltext index them. (Maybe don’t fulltext index anything, because you can’t drop it!) These are hard to get rid of once you put them in.
• Pay attention to identifier schemes and entity id partitioning to increase data locality, it will save you later.
• Pay attention to routing in a partition-aware way to increase peer locality.
• Do not rely on history features for customer-facing functionality on a long timescale: materialize your history (the problem here is being tied to old schema and “fixing the past”).
• Have a plan for regular (but targeted) excision to control history size once storage gets expensive. (this may take years, though)--not all attributes have equal churn, and the value of history often decays over time and even becomes a liability (cost, compliance, exposure to breaches, etc).
• Avoid d/entity (prefer query or pull) unless you know what you are doing.
• Use attribute predicates and entity predicates early on.
• Think carefully about how you design your transactions for data races--any dependent read in a peer is a race waiting to happen, and datomic doesn’t have many tools out of the box for managing this. This is a warning especially for those used to traditional single-process, blocking-transaction databases (i.e. any SQL db)
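A sketch of the attribute-predicate advice in the list above (the namespace, fn, and attribute names are made up; :db.attr/preds itself is the real schema key):

```clojure
;; An attribute predicate is a fully-qualified fn of the value, run at
;; transaction time; this one doubles as a guard against over-long strings.
(ns my.preds)

(defn short-string? [s]
  (<= (count s) 4096))

;; Installing it on an attribute:
;; [{:db/ident       :person/name
;;   :db/valueType   :db.type/string
;;   :db/cardinality :db.cardinality/one
;;   :db.attr/preds  'my.preds/short-string?}]
```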
#2022-06-0616:11favilaI concur with @U051V5LLP pretty much.#2022-06-0616:20favilaThe good parts:
• Attribute-centric modeling plus datalog querying is amazing, even with the occasional badly-ordered query. It’s at least good to know that what you write is what you’ll get, but even accepting that there’s room for improvement: knowing what index a clause will use, or knowing what clauses/rules are contributing most to a result set size or CPU time.
• Having an “easy” transaction queue for stream-based processing.
• The peer model for scaling reads.
• History for internal-facing auditing and debugging. It’s a blessing and a curse. I really wish there were more knobs here other than history/no-history. Some attributes you really want everything forever, some are valuable for a few weeks or months and then just contribute to history index cost and churn. But I don’t agree with e.g. datalevin that it shouldn’t exist.#2022-06-0616:31favilaBTW, when I mean “at scale”, this is from maintaining a (now) 16 billion datom database over 7+ years with a multi-tenant workload.#2022-06-0618:42Kris CThanks for great info, guys 🙏#2022-06-0707:31Kris CAnyone else cares to share their experience with Datomic (on-prem) in production?#2022-06-0707:34Ivar RefsdalI'll share some during the day 🙂
#2022-06-0708:28octahedrion@U09R86PA4 what do you mean by "materialize your history" ?#2022-06-0710:06octahedrionWRT long texts, is there a case to be made for storing long texts as structures of smaller texts ? For example, storing a document as entities like nested HTML elements (paragraphs, lists etc), or even going so far as to represent texts as structures of words.#2022-06-0712:08favila“Materialize your history” = represent history that is customer facing explicitly with schema and data you design. You would read this “history” data using the current database, not a history database. (Or you could represent it out of datomic entirely)
#2022-06-0712:12favilaRe: storing large texts as structures of smaller texts: unless you have some use for that, probably not. Semi structured text doesn’t usually assign identity to its elements so updates would be hard.#2022-06-0712:12favilaJust seems like unnecessary complexity most of the time#2022-06-0712:26dvingo@U09R86PA4 i'm interested why you suggest not using entity API (I'm curious, I don't have any strong opinions here)? is it due to performance?#2022-06-0712:32favilaIt encourages code patterns that don’t have an easy to re-examine boundary between a “data access” layer (where you can put an interface you change at a different cadence from the schema) and the data consuming code. This also makes people use entity walking with Clojure code to implement queries instead of actual queries (more familiar but usually less clear and inefficient because it always uses EAVT indexes). And it’s an invisible source of lazy IO which makes reasoning about performance and profiling hard#2022-06-0712:32favilaAnd it makes it impossible to use the client api#2022-06-0712:33dvingovery useful info - thanks for explaining#2022-06-0712:35favilaIt’s sometimes exactly what you need though. Eg it’s a good replacement for any time someone would normally be reaching for a data source pattern#2022-06-0712:35favilaEg it’s a great fit for lacinia resolvers#2022-06-0712:36favilaWhere it’s difficult to predict what you will need#2022-06-0712:37favilaIf you can’t predict what you’ll need it’s a very performant alternative to what is usually done, which is n+1 madness
#2022-06-0712:37favilaBut it’s better to know what you’ll need and make a query or pull expression up front for code organization purposes#2022-06-0618:06jdkealyif i set up memcached, can i set my memoryindex and objectcache to 0 ?#2022-06-0618:10favilano#2022-06-0618:47favilamemcache/valcache is to reduce pressure on your storage, not on your peer size. objectcache must still be big enough for the working set of your queries, and memoryindex controls how frequently you index (you can’t index continually).#2022-06-0619:48Vishal GautamHello 👋,
I am trying to learn more about datomic rules by following this tutorial: https://www.youtube.com/watch?v=7lm3K8zVOdY&t=864s&ab_channel=ClojureTV
While invoking owns? function I am getting this error
java.lang.IllegalArgumentException: "Cannot resolve key: 24a96e20-f526-4f7f-ba38-4f684caa5607"
Here is the full code. 🙏
(def owner-rules
  '[[(owns? ?cus-id ?e)
     [?e :customer/id ?cus-id]]
    [(owns? ?cus-id ?e)
     [?e ?ref-attr ?r]
     (owns? ?cus-id ?r)]])

(defn owns? [cid pid db]
  (d/q '{:find  [?pur]
         :in    [$ ?cus-id ?pur %]
         :where [(owns? ?cus-id ?pur)]}
       db cid [:purchase/id pid] owner-rules))

;; throws error :(
(comment
  (owns?
   #uuid "0fb7ea94-44af-46fa-98ca-0ddb5eb23123"
   #uuid "24a96e20-f526-4f7f-ba38-4f684caa5607"
(d/db conn)))#2022-06-0713:53KeithCan you post the full stack trace? It looks like Datomic is failing to resolve your lookup ref [:purchase/id #uuid "24a96e20-f526-4f7f-ba38-4f684caa5607"] to an entity id, but it's hard to tell for sure without the full stack trace.
Also:
• Does the :purchase/id attr have a value for :db/unique?
• Does your db have an entity with #uuid "24a96e20-f526-4f7f-ba38-4f684caa5607" for :purchase/id? #2022-06-0714:55Vishal Gautam@U424XHTGT Here is the full source
https://github.com/Novus-School/novus/blob/master/novus/src/main/novus/superpowers.clj#L21
> Does the :purchase/id attr have a value for :db/unique?
Yep :db/unique :db.unique/identity
> Does your db have an entity with #uuid "24a96e20-f526-4f7f-ba38-4f684caa5607" for :purchase/id?
Yep, if you look at the line 88, it is transacted using that ID#2022-06-0714:59Vishal GautamError Trace
java.lang.IllegalArgumentException : "Cannot resolve key: 24a96e20-f526-4f7f-ba38-4f684caa5607"
in datomic.core.datalog/resolve-id (datalog.clj:330)
in datomic.core.datalog/resolve-id (datalog.clj:327)
in datomic.core.datalog/fn--24749/bind--24761 (datalog.clj:442)
in datomic.core.datalog/fn--24749 (datalog.clj:619)
in datomic.core.datalog/fn--24749 (datalog.clj:399)
in datomic.core.datalog/fn--24599/G--24573--24614 (datalog.clj:119)
in datomic.core.datalog/join-project-coll (datalog.clj:184)
in datomic.core.datalog/join-project-coll (datalog.clj:182)
in datomic.core.datalog/fn--24672 (datalog.clj:289)
in datomic.core.datalog/fn--24672 (datalog.clj:285)
in datomic.core.datalog/fn--24578/G--24571--24593 (datalog.clj:119)
in datomic.core.datalog/eval-clause/fn--25333 (datalog.clj:1460)
in datomic.core.datalog/eval-clause (datalog.clj:1455)
in datomic.core.datalog/eval-clause (datalog.clj:1421)
in datomic.core.datalog/eval-rule/fn--25365 (datalog.clj:1541)
in datomic.core.datalog/eval-rule (datalog.clj:1526)
in datomic.core.datalog/eval-rule (datalog.clj:1505)
in datomic.core.datalog/eval-query (datalog.clj:1569)
in datomic.core.datalog/eval-query (datalog.clj:1552)
in datomic.core.datalog/eval-clause/fn--25333 (datalog.clj:1477)
in datomic.core.datalog/eval-clause (datalog.clj:1455)
in datomic.core.datalog/eval-clause (datalog.clj:1421)
in datomic.core.datalog/eval-rule/fn--25365 (datalog.clj:1541)
in datomic.core.datalog/eval-rule (datalog.clj:1526)
in datomic.core.datalog/eval-rule (datalog.clj:1505)
in datomic.core.datalog/eval-query (datalog.clj:1569)
in datomic.core.datalog/eval-query (datalog.clj:1552)
in datomic.core.datalog/eval-clause/fn--25333 (datalog.clj:1477)
in datomic.core.datalog/eval-clause (datalog.clj:1455)
in datomic.core.datalog/eval-clause (datalog.clj:1421)
in datomic.core.datalog/eval-rule/fn--25365 (datalog.clj:1541)
in datomic.core.datalog/eval-rule (datalog.clj:1526)
in datomic.core.datalog/eval-rule (datalog.clj:1505)
in datomic.core.datalog/eval-query (datalog.clj:1569)
in datomic.core.datalog/eval-query (datalog.clj:1552)
in datomic.core.datalog/qsqr (datalog.clj:1658)
in datomic.core.datalog/qsqr (datalog.clj:1597)
in datomic.core.datalog/qsqr (datalog.clj:1615)
in datomic.core.datalog/qsqr (datalog.clj:1597)
in datomic.core.query/q* (query.clj:664)
in datomic.core.query/q* (query.clj:651)
in datomic.core.local-query/local-q (local_query.clj:58)
in datomic.core.local-query/local-q (local_query.clj:52)
in datomic.core.local-db/fn--27457 (local_db.clj:28)
in datomic.core.local-db/fn--27457 (local_db.clj:24)
in datomic.client.api.impl/fn--13153/G--13146--13160 (impl.clj:41)
in datomic.client.api.impl/call-q (impl.clj:150)
in datomic.client.api.impl/call-q (impl.clj:147)
in datomic.client.api/q (api.clj:393)
in datomic.client.api/q (api.clj:365)
in datomic.client.api/q (api.clj:395)
in datomic.client.api/q (api.clj:365)
in clojure.lang.RestFn.invoke (RestFn.java:486)
in novus.superpowers/owns? (superpowers.clj:135)
in novus.superpowers/owns? (superpowers.clj:134)
in novus.superpowers/eval108861 (/Users/vishalgautam/projects/novus/novus-server/novus/src/main/novus/superpowers.clj:157)
in novus.superpowers/eval108861 (/Users/vishalgautam/projects/novus/novus-server/novus/src/main/novus/superpowers.clj:157)
in clojure.lang.Compiler.eval (Compiler.java:7181)
in clojure.lang.Compiler.eval (Compiler.java:7171)
in clojure.lang.Compiler.eval (Compiler.java:7136)
in clojure.core/eval (core.clj:3202)
in clojure.core/eval (core.clj:3198)
in unrepl.repl$i9hjMxfOQ2IzbCA5TVia2QQEJNg$start$interruptible_eval__25579$fn__25580$fn__25581$fn__25582.invoke (NO_SOURCE_FILE:803)
in unrepl.repl$i9hjMxfOQ2IzbCA5TVia2QQEJNg$start$interruptible_eval__25579$fn__25580$fn__25581.invoke (NO_SOURCE_FILE:803)
in clojure.lang.AFn.applyToHelper (AFn.java:152)
in clojure.lang.AFn.applyTo (AFn.java:144)
in clojure.core/apply (core.clj:667)
in clojure.core/with-bindings* (core.clj:1977)
in clojure.core/with-bindings* (core.clj:1977)
in clojure.lang.RestFn.invoke (RestFn.java:425)
in unrepl.repl$i9hjMxfOQ2IzbCA5TVia2QQEJNg$start$interruptible_eval__25579$fn__25580.invoke (NO_SOURCE_FILE:795)
in clojure.core/binding-conveyor-fn/fn--5772 (core.clj:2034)
in clojure.lang.AFn.call (AFn.java:18)
in java.util.concurrent.FutureTask.run (FutureTask.java:264)
in java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1136)
in java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:635)
in java.lang.Thread.run (Thread.java:833)
#2022-06-0715:00Vishal Gautamex-data error message
{:cognitect.anomalies/category :cognitect.anomalies/incorrect, :cognitect.anomalies/message "processing clause: [?e :customer/id ?cus-id], message: Cannot resolve key: 24a96e20-f526-4f7f-ba38-4f684caa5607"}#2022-06-0715:47Nedeljko RadovanovicHello everyone 👋
I am currently working on a small project and i need help with something..
I want to stack my queries...
(defn users
  ([db]
   (users db nil))
  ([db {:keys [cond]}]
   (let [q (cond-> '{:find  [(pull ?eid [*])]
                     :where [[?eid :user/created]]}
             (map? cond) (concat (map (partial cons '?eid) cond)))]
     (map first (d/q q db)))))
Here is a test for this:
(let [cond {:user/firstname "dummy-value" :user/region :dummy-value}
q (cond->
'{:find [(pull ?eid [*])]
:where [[?eid :user/created]]}
(map? cond) (concat (map (partial cons '?eid) cond))]
q)
;; ([:find [(pull ?eid [*])]] [:where [[?eid :user/created]]] (?eid :user/firstname "dummy-value") (?eid :user/region :dummy-value)) THIS IS MY RETURN VALUE ( BAD )
;; {:find [(pull ?eid [*])] :where [[?eid :user/created] [?eid :user/firstname "dummy-value"] [?eid :user/region :dummy-value]]} THIS IS RETURN VALUE I WANT ( GOOD )
I am kind of stuck here 😓
Any reference to documentation or a suggestion will help..
Thank you in advance..#2022-06-0715:54favilaYou’re using “map” on a map. I think you meant something like (update :where into (map #(into ['?eid] %)) cond)#2022-06-0715:55favilatry using let only instead of cond-> so that the intermediates are named and it’s less “clever”. I think the mistake will be more obvious.#2022-06-0717:03Nedeljko RadovanovicThank you very much. 😊#2022-06-0720:36Vishal GautamAnyone? 🙏
https://clojurians.slack.com/archives/C03RZMDSH/p1654544893405789#2022-06-0808:02JAtkinsDoes datomic support the concept of optionality? I have a schema where ?opt-property may or may not be present in
[:find ?always ?opt-property
:in %
:where
[:always ?always]
[?always :opt ?opt-property]]
I've tried using or-join to bind ?opt-property to nil when it's not present, but no luck since that's an invalid datalog query.#2022-06-0809:00Ivar RefsdalThere is https://docs.datomic.com/on-prem/query/query.html#get-else
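Applied to the query above, the get-else approach looks roughly like this (:none is an arbitrary sentinel; get-else cannot return nil):

```clojure
'[:find ?e ?always ?opt-property
  :where
  [?e :always ?always]
  [(get-else $ ?e :opt :none) ?opt-property]]
```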
#2022-06-0809:39favilaYou have to use a sentinel to represent “no value” that isn’t nil#2022-06-0809:41favilaAlternatively it sometimes makes sense to push that concern into the pull projection#2022-06-0816:22pieterbreedI am trying to push a datomic-cloud app to a new datomic-cloud stack, ie this is the first push that I'm trying to perform on this code-base/datomic installation.
$ clojure -A:ion-dev '{:op :push :region "eu-west-1"}'
WARNING: Implicit use of clojure.main with options is deprecated, use -M
{:retry 1}
{:retry 2}
{:retry 3}
{:retry 4}
{:retry 5}
{:retry 6}
{:command-failed "{:op :push :region \"eu-west-1\"}",
:causes
({:message
"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: ENQWWR2D199SMDXB; S3 Extended Request ID: DRHqS+Bx4XxeHjliYGQ6uCgnJ/kKsTXzeH0ky20Ko9ICYGbgeo+DNAXLHlkDx6TJaLPQb/7r1hY=; Proxy: null)",
:class AmazonS3Exception})}
I've spent the afternoon making sure I've got the latest of everything. Things like datomic cloud list-systems and datomic system list-instances <> work, as in I get results. The CloudFormation stack shows SUCCESS everywhere, I can connect from my local machine to the datomic db etc.
I'm not sure how to debug this ion push issue. I have tried with different auth types; currently with an IAM user, with attached policies for AdministratorAccess, datomic-admin-<system> and an additional policy to grant s3:* on everything on the datomic-gui-<guid> bucket... but still getting this S3 permissions error above.
How can I debug this?#2022-06-0816:28pieterbreedMy IAM creds are being loaded with AWS_* environment variables#2022-06-0818:11Daniel JompheFWIR, envars are not supported by the ion-dev tool.
Try using aws configure to set your ~/.aws/* files correctly.#2022-06-0818:33pieterbreedOk, I did not consider that. Thank you, I’ll test tomorrow and revert#2022-06-0818:34Daniel JompheI couldn't find again in Datomic Cloud's docs where it's documented that we shouldn't use those. I might have learned that from one of the quick setup videos they published. Couldn't find those quickly either.#2022-06-0818:35pieterbreedI must admit; I’ve been through this process once before shortly after ions was announced. I am struggling more this time around.#2022-06-0818:36Daniel JompheDatomic Cloud does indeed assemble together many AWS parts. 🙂 Celebrate every successful step (even though it's often not too hard.) 🙂#2022-06-0819:43Robert A. Randolphhttps://docs.datomic.com/cloud/ions/ions-reference.html#push lists the necessary information for supplying AWS credentials. We've taken note of the difficulty that you're encountering here.
#2022-06-0910:29pieterbreedYeah guys, I don't know... @UEFE6JWG4 Here is my current setup:
• I have an IAM user that has the datomic-admin-<system>-<region> policy and AWS-supplied AdministratorAccess (from desperation)
• Additionally I created a policy that grants access to datomic-releases-... (after encountering https://clojurians-log.clojureverse.org/datomic/2020-07-05/1594042226.483900)
• I've configured this user's credentials with a named profile using aws configure, and the shell session below shows that it works with the aws cli.
Below is actual output of a shell session:
$ unset AWS_PROFILE
$ aws s3 ls -<guid>
Unable to locate credentials. You can configure credentials by running "aws configure".
$ export AWS_PROFILE=nette-prod
$ aws s3 ls -<guid>
$ aws s3 cp deps.edn -<guid>/deps.edn
upload: ./deps.edn to -<guid>/deps.edn
$ aws s3 ls -<guid>/
2022-06-09 11:43:50 244 deps.edn
$ aws s3 rm -<guid>/deps.edn
delete: -<guid>/deps.edn
$ clojure -A:ion-dev '{:op :push :creds-profile "nette-prod"}'
WARNING: Implicit use of clojure.main with options is deprecated, use -M
{:retry 1}
{:retry 2}
{:retry 3}
{:retry 4}
{:retry 5}
{:retry 6}
{:command-failed "{:op :push :creds-profile \"nette-prod\"}",
:causes
({:message
"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: PQMXEV1DW5FMZ8S6; S3 Extended Request ID: 0EGW6n1B8a77BkB8A5rb50RXIrFaKh4qzCXNDzd9++WQYc6HLrUvQnF7Kfg36AMrtGKGv0xb76Y=; Proxy: null)",
:class AmazonS3Exception})}
• I'm not sure what S3 bucket is being accessed here nor how to debug the permissions for that access. Clearly the user configured using the aws named profile has access to the datomic-code... bucket.
• I'm not sure if this operation listing from datomic-releases-... should succeed, I'm not sure if this is the problem or not, but it does not work:
$ aws s3 ls
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I'm attaching various things here which might be useful.
$ datomic cloud list-systems
[{"name":"app-20220607",
"storage-cft-version":"Unknown",
"topology":"Unknown"}]
$ datomic system describe-groups app-20220607
[{"name":"app-20220607-Compute-<nrs-and-letters>",
"type":"compute",
"endpoints":
[{"type":"client",
"api-gateway-endpoint":
"https://<numbers-and-letters>.",
"api-gateway-id":"<numbers-and-letters>",
"api-gateway-name":"datomic-app-20220607-client-api"},
{"type":"http-direct",
"api-gateway-endpoint":
"https://<numbers-and-letters>",
"api-gateway-id":"<numbers-and-letters>",
"api-gateway-name":"datomic-app-20220607-ions"}],
"cft-version":"939",
"cloud-version":"9127"}]
$ clojure -Sdescribe
{:version "1.11.1.1113"
:config-files ["/usr/local/lib/clojure/deps.edn" "/home/pieter/.clojure/deps.edn" "deps.edn" ]
:config-user "/home/pieter/.clojure/deps.edn"
:config-project "deps.edn"
:install-dir "/usr/local/lib/clojure"
:config-dir "/home/pieter/.clojure"
:cache-dir ".cpcache"
:force false
:repro false
:main-aliases ""
:repl-aliases ""}
$ cat ~/.clojure/deps.edn
{:aliases {:ion-dev {:deps {com.datomic/ion-dev {:mvn/version "1.0.306"}}
:main-opts ["-m" "datomic.ion.dev"]}}
:mvn/repos {"datomic-cloud" {:url ""}}}
$ cat deps.edn
{:paths ["src" "resources"]
:deps {com.datomic/client-cloud {:mvn/version "1.0.120"}
com.datomic/ion {:mvn/version "1.0.59"}}
:mvn/repos {"datomic-cloud" {:url ""}}}
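A sketch of the extra read-only policy referred to in the bullet list above (the bucket name appears later in this thread; the exact set of S3 actions the push actually needs is an assumption, not something stated in the thread):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatomicReleasesRead",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::datomic-releases-1fc2183a",
        "arn:aws:s3:::datomic-releases-1fc2183a/*"
      ]
    }
  ]
}
```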
#2022-06-0915:59pieterbreedI suspect my AWS provided best practice Control Tower guardrails are preventing access to s3 outside of eu-west-1…#2022-06-0917:45Daniel JompheLooks like opening up a ticket with Datomic Cloud would be a good idea. I'd love to learn afterwards what was missing.
I, too, get this:
aws s3 ls
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
so I suspect it's not the root cause of your issue pushing.
We also use a Control Tower (with all its default guardrails) and it doesn't hinder us.#2022-06-0917:46Daniel JompheI feel like when I'll learn what's the issue's cause, I'll facepalm and feel like I should have been of better help to you, Pieter.#2022-06-0921:49pieterbreedJust having someone to talk to and run ideas by is immensely helpful, thank you. gratitude-thank-you#2022-06-0922:47pieterbreedI think I'm getting closer to cracking it.
The trail of crumbs might be interesting:
• this is a new AWS account, I tried to follow best practice, this means using AWS Control Tower to set up a multi-account AWS structure.
• One of the "guardrails" they provide and semi-suggest (and sounded cool to me) was to limit access to any AWS resources, within certain regions only.
• In our case this meant only AWS resources API calls within eu-west-1
• I believe <s3://datomic-releases-1fc2183a> is in us-east-1, so I've allowed that region in our AWS SCP too.
• This has gotten me further than before. ion-dev {:op :push} fails further down the road now.
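An SCP of the kind described in these crumbs (deny API calls outside an allow-list of regions, with the usual carve-out for global services) generally follows AWS's published region-deny pattern; the sketch below is illustrative, not the actual guardrail from this account, and the region and service lists are assumptions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllOutsideAllowedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "sts:*", "organizations:*", "cloudfront:*", "route53:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["eu-west-1", "us-east-1"]
        }
      }
    }
  ]
}
```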
#2022-06-0922:47pieterbreed$ clojure -A:ion-dev '{:op :push :creds-profile "nette-prod"}'
WARNING: Implicit use of clojure.main with options is deprecated, use -M
Downloading: com/datomic/ion-http-direct/1.0.46/ion-http-direct-1.0.46.pom from
Downloading: com/datomic/ion-lambda-dispatcher/0.9.34/ion-lambda-dispatcher-0.9.34.pom from
Downloading: com/cognitect/ion-runtime/1.0.20/ion-runtime-1.0.20.pom from
Downloading: com/cognitect/caster/0.9.42/caster-0.9.42.pom from
Downloading: com/cognitect/http-endpoint/1.0.101/http-endpoint-1.0.101.pom from
{:command-failed "{:op :push :creds-profile \"nette-prod\"}",
:causes
({:message
"Failed to read artifact descriptor for com.datomic:ion-resolver:jar:0.9.17",
:class ArtifactDescriptorException}
{:message
"Could not transfer artifact com.datomic:ion-resolver:pom:0.9.17 from/to datomic-cloud (): Unable to execute HTTP request: Connect to [] failed: Connect timed out",
:class ArtifactResolutionException}
{:message
"Could not transfer artifact com.datomic:ion-resolver:pom:0.9.17 from/to datomic-cloud (): Unable to execute HTTP request: Connect to [] failed: Connect timed out",
:class ArtifactTransferException}
{:message
"Unable to execute HTTP request: Connect to [] failed: Connect timed out",
:class SdkClientException}
{:message
"Connect to [] failed: Connect timed out",
:class ConnectTimeoutException}
{:message "Connect timed out", :class SocketTimeoutException})}
$ clojure -A:ion-dev '{:op :push :creds-profile "nette-prod"}'
WARNING: Implicit use of clojure.main with options is deprecated, use -M
{:retry 1}
{:retry 2}
{:retry 3}
{:retry 4}
{:retry 5}
{:retry 6}
{:command-failed "{:op :push :creds-profile \"nette-prod\"}",
:causes
({:message
"Unable to execute HTTP request: Connect to [] failed: Connect timed out",
:class SdkClientException}
{:message
"Connect to [] failed: Connect timed out",
:class ConnectTimeoutException}
{:message "Connect timed out", :class SocketTimeoutException})}#2022-06-0923:17pieterbreedThose last few errors were actually just a misbehaving wifi router. :push worked, :deploy worked. I think I found the well-paved road again...#2022-06-0923:22pieterbreedI've gotten here... {:deploy-status "FAILED", :code-deploy-status "FAILED"} Will continue tomorrow.#2022-06-1007:31pieterbreed{:deploy-status "SUCCEEDED", :code-deploy-status "SUCCEEDED"} I think we can call this "closed" now. Thanks so much for engaging.#2022-06-1010:56Daniel Jomphe🙂 Great! Happy for you, good work!! :thumbsup::skin-tone-3:#2022-06-1010:58Daniel JompheBTW it was the first time I saw those
{:retry 1}
{:retry 2}
{:retry 3}
{:retry 4}
{:retry 5}
{:retry 6}
I know they come from cognitect.anomaly's retry strategy for this kind of error condition. Your router config must have been fixed by now, then. :)#2022-06-1010:23valtteriHi, I have a two-part question:
• Our client is running Datomic Pro and now they need to implement a full featured search into their application (full-text search, faceting etc.). One obvious idea is to index the data from Datomic to Elasticsearch or similar engine. I was wondering if there are other known good solutions in this space?
• The client originally developed a multi-tenant system. However, years went by and now their biggest concern is "enterprise capabilities" (security especially) and there's pressure to start providing single-tenant option to certain clients. Does Datomic have some kind of a story for going from multi-tenant to single-tenant? I mean in both technical and licensing terms.
Two questions above are related since I'm supposed to architect a solution for search that would also be future proof in single vs. multi-tenant sense. Thanks!#2022-06-1011:15nottmeyregarding your first question: this was the answer I got, when I asked a similar question last time (feed elasticsearch or similar and break down your query)#2022-06-1012:37tatutor your nodes could have local lucene indexes, like xtdb does it... you might take a look at https://github.com/xtdb/xtdb/tree/master/modules/lucene even if it isn't directly usable#2022-06-1013:58ennI’m not aware of anything off-the-shelf to index Datomic to ES. we rolled our own indexer that consumes the transaction log and updates ES.#2022-06-1017:14pyry+1 to "roll your own indexer"#2022-06-1019:39valtteriThanks for all the responses! Local Lucene might be too low-level so I guess indexing to ES seems like a viable way to go forward.
Still need to think about single-tenancy.. Both Datomic and ES feel too large and expensive if there’s a need to spin up a new environment for a single customer.#2022-06-1019:44valtteriOr.. maybe that’s how it is and the price-tag for single-tenancy needs to be set accordingly. :thinking_face:#2022-06-1019:52nottmeyFor search there is also something like https://algolia.com, maybe they have a better pay per use model#2022-06-1411:07dazldalgolia is wildly expensive - we’re using https://typesense.org for now, and it’s a solid piece of engineering.
it's been on my mind to write up how we're putting all of this together - graphQL in front of datomic & typesense, and a tx log watcher that invokes updates.
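The "tx log watcher" idea above (and the "roll your own indexer" suggestion earlier in the thread) can be sketched roughly like this against Datomic on-prem's tx-report-queue; index-entity! is a hypothetical stand-in for whatever Elasticsearch/Typesense client call you use:

```clojure
(require '[datomic.api :as d])

(defn watch-tx-log!
  "Blocks forever, forwarding the entities touched by each committed
  transaction to an external search index."
  [conn index-entity!]
  (let [queue (d/tx-report-queue conn)] ; java.util.concurrent.BlockingQueue
    (loop []
      (let [{:keys [db-after tx-data]} (.take queue)
            ;; entity ids touched by this transaction; a real indexer
            ;; would filter to the attributes it actually indexes and
            ;; skip transaction-metadata entities
            eids (into #{} (map :e) tx-data)]
        (doseq [eid eids]
          (index-entity! (d/pull db-after '[*] eid))))
      (recur))))
```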
#2022-06-1411:33nottmey@U3ZUC5M0R nice recommendation, did not know about typesense.
Do you know whether they comply with european data protection law? (because it seems like an US service and our customers want to search in european users data)#2022-06-1411:34dazldworth asking them - if you self host, then clearly it's ok, but the cloud is different
#2022-06-1117:23Patrick BrownEDITS: Leaving it all here. Despite the stupidity.
Hey Datomic! What is the typical cause of this error. I’m following the ions tutorial on the datomic site getting my feet wet. Everything was going well until deploy gave me the below. I go and dig around in code deploy and my ec2 instance, but looking at the monitoring tab for the instance shows me nothing I wouldn’t expect to see. In all I don’t see an error message that can tell me where things went wrong. So, how does one debug these things? All I’ve got to go on is…
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
Since all I see now is that my instance is healthy, where do I go to find the problem?
EDIT 1: You find your alarms in CloudWatch and check
https://docs.datomic.com/cloud/troubleshooting.html#troubleshooting-ion-deploy before asking.
EDIT 2: I still can’t find anything telling me what’s wrong. Please take a look at that attached error message and help me out with what I’m missing. My alarms are write scaling, but since I’ve used the default datomic supplied values I don’t think it’s relevant.
EDIT 3: Yeah, so there are event logs. In the individual deployments, mine are.
`Event details`
Error code
ScriptTimedOut
Script name
scripts/deploy-validate
Message
Script at specified location: scripts/deploy-validate failed to complete in 300 seconds
This is clearly the application code, since I didn’t change the tutorial code, it’s config, yet everything works at the REPL… I’m lost.
DATOMIC ERRORS = CLOJURE ERRORS = GIRLFRIEND ERRORS = I could tell you what you did wrong, but if you were smart enough to do better, I wouldn’t be mad in the first place.#2022-06-1308:20Christian JohansenWhat version of the Postgresql driver does the latest onprem transactor use? I’m having trouble connecting the transactor to a Postgres 14 instance: “PSQLException: The authentication type 10 is not supported”. I’m no postgres expert, but I think that this means that the client (the transactor) isn’t capable of scram-sha-256 authentication, is that correct?#2022-06-1311:06Ivar RefsdalVersion 9.3-1102 as far as I can tell:
$ unzip -l datomic-pro-1.0.6397.zip | grep postgresql
592322 2022-04-01 21:54 datomic-pro-1.0.6397/lib/postgresql-9.3-1102-jdbc41.jar
#2022-06-1311:06Christian JohansenRight, thanks 👍#2022-06-1311:06Christian JohansenI ended up using md5 passwords in postgres and got things working#2022-06-1311:06Christian JohansenBut it would be nice to be able to use scram
#2022-06-1317:13JohnJyou can swap out the driver for the latest one#2022-06-1512:41Joe Lane@U9MKYDN4Q JohnJ is right, you can swap the driver for a newer one.
If you are using the PG Setup script packaged w/ Datomic and targeting Postgres 14 you may need to modify it. IIRC there was a change made in Postgres 14 which our script hasn't yet been updated for.
If you're aware of using scram-sha-256 auth I'm sure you'll be able to figure out how to modify the script to accomplish your needs.
Hope that helps#2022-06-1512:44Christian JohansenThanks! I ended up changing the password mechanism in psql, which was the least effort at this point 🙂#2022-06-1512:46Joe Laneplease reach out if you have other questions, have fun!#2022-06-1512:50Christian JohansenThanks 🙂#2022-06-1418:19Benjaminis it possible to change from unique/value to unique/identity ?#2022-06-1418:22favilaYes. This only alters entity tempid resolution behavior.#2022-06-1418:22favila(i.e. “upsert”)#2022-06-1418:25Benjaminnice#2022-06-1418:27Daniel JompheFirst time I inspect closely the CloudFormation changes between versions.
Datomic Cloud 973-9132 is a massive simplification over the previous version (at the user-visible CFN level).
Production compute went from e.g. 4k json lines of CFN to 1k lines.#2022-06-1418:27Daniel Jomphehttps://docs.datomic.com/cloud/changes.html#2022-06-1511:49Daniel JompheWas true but https://clojurians.slack.com/archives/C03RZMDSH/p1655244218799419?thread_ts=1655233135.326879&cid=C03RZMDSH.#2022-06-1418:38danierouxHow did I miss this release! 1.11.1 all around, this is a good Tuesday 😊
#2022-06-1418:39danierouxAlso, IonApiIntegrationId that I need. But 1.11.1 is the happy maker.#2022-06-1418:57uwoNot sure if this is the best place to report a typo. From https://docs.datomic.com/on-prem/operation/monitoring.html#transactor-metrics
The description for LogIngestBytes is "in-memory size of log when a database size". Judging from the description of LogIngestMsec, I'm guessing you want s/size/start#2022-06-1418:58Daniel Jomphe#2022-06-1419:01Daniel JompheThis feels like a breaking change.
I'd have loved to see it written with this other change:#2022-06-1422:02Robert A. Randolph@U0514DPR7 can you confirm the template URL that you're using to update please?#2022-06-1422:03Robert A. RandolphI believe that you may be seeing an issue with that parameter due to an incorrect template being listed on that page. It is fixed, and you should be using https://s3.amazonaws.com/datomic-cloud-1/cft/973-9132/datomic-compute-973-9132.json to update.#2022-06-1511:48Daniel JompheThanks @UEFE6JWG4, the new version is good.#2022-06-1512:15joshkhhas there been any progress in the area of a packaged solution for Datomic Cloud database restore-from-backup, either community supported or an official solution from Cognitect?
we were investigating https://github.com/fulcrologic/datomic-cloud-backup but the project is no longer active, and last we checked it did not support tuples.
#2022-06-1512:48onetomaccording to https://docs.datomic.com/cloud/transactions/transaction-functions.html#calling
transaction function calls within transactions should be represented as lists.
if i use vectors instead, the call still works, just like with built-in tx fns, which are called via their idents, like :db/add, :db/retractEntity...
can i rely on this undocumented behaviour in the future?
i would rather use vectors, because they are slightly more concise to construct programmatically,
but also because tooling wouldn't flag it as a faulty function call for missing its 1st db argument.#2022-06-1512:53favilaThis feels like a very stylistic choice. On-prem documentation has only ever mentioned vectors and I've only ever used vectors. I'm surprised to see lists in docs.#2022-06-1512:54favilaso I would be extremely surprised if vectors one day stopped working#2022-06-1512:54favilaI think dispatch is entirely driven by the item in the first position#2022-06-1513:02onetomso it just cares about whether the head of the seq is a keyword? or qualified-symbol?.
that sounds reassuring.
the problem with list is that we would need to jump thru some small hoops, if we want to generate txs, eg:
(let [some-tx-fn-arg 123]
  (d/transact conn {:tx-data [(list `some-tx-fn some-tx-fn-arg)]}))
or
`[(some-tx-fn ~some-tx-fn-arg)]
or
[`(some-tx-fn ~some-tx-fn-arg)]
but this wouldn't work:
[(`some-tx-fn ~some-tx-fn-arg)]
which i myself do understand, but not everyone on the team is so fluent in clojure.
they would however perfectly understand
[[`some-tx-fn some-tx-fn-arg]]
#2022-06-1513:57timoAnyone using Datomic with OracleDB? Our DB is ever increasing. Backups on the other side are comparatively small. I am using gc-storage now but it doesn't change a thing. Anyone knows something that keeps the OracleDB from freeing up space?#2022-06-1514:05dazldI'd say, try restoring your backup to another db, and compare?#2022-06-1514:05dazldmaybe there's something that jumps out#2022-06-1514:06timothat's a good idea, let's see if this can happen at my company.
#2022-06-1514:07dazldworst that happens is you exercise your backups I guess 🙂
#2022-06-1514:09timo#2022-06-1517:14favilaNot an oracle expert, but apparently they can experience fragmentation. Google “oracle segment advisor”
#2022-06-1617:34Daniel JompheFollowing the upgrade to Datomic Cloud 973-9132, when pushing ions, :dependency-conflicts are reported as if the cluster ran on e.g. clojure 1.10, which is certainly false.
From experience, this is a false (static) signal due to the absence of a new accompanying release of ion-dev, right?
So it's safe to assume it's OK to use clojure 1.11 even though it tells me it's overwritten back to 1.10, right?#2022-06-1617:34Daniel Jomphe#2022-06-1617:35Daniel Jomphe#2022-06-1617:42Joe Lane@U0514DPR7, using an older ion-dev will show 1.10 as the dep conflict. The cloud compute nodes are using Clojure 1.11.0 in the latest release.#2022-06-1617:43Daniel JompheThanks for the confirmation, Joe.#2022-06-1619:19zakkorhaving trouble connecting to datomic running in docker
docker run -p 4334:4334 --rm -t datomic
Launching with Java options -server -Xms1g -Xmx1g -Ddatomic.printConnectionInfo=true
Starting datomic:<DB-NAME>, storing data in: data ...
System started datomic:<DB-NAME>, storing data in: data
(d/create-database "datomic:")
=> org.h2.jdbc.JdbcSQLException: Connection is broken: "java.net.ConnectException: Connection refused (Connection refused): 0.0.0.0:4335" [90067-171]#2022-06-1619:20zakkorIf I run the container with --net host however, it works.
I have no idea why. Is there any gotcha I'm missing when it comes to datomic in docker?#2022-06-1619:33zakkorI think I've made a bit of progress, the 4335 port wasn't published.
I've made sure to publish all ports using: docker run -p 4334:4334 -p 4335:4335 -p 4336:4336 --rm -t datomic
Now I get this error instead:
main] WARN datomic.coordination - {:event :coord/lookup-endpoint-failed, :pid 196831, :tid 1}
java.util.concurrent.ExecutionException: org.h2.jdbc.JdbcSQLException: Remote connections to this server are not allowed, see -tcpAllowOthers [90117-171]#2022-06-1620:05zakkorCurrent solution: stop using docker. what a mess#2022-06-1620:20Leaf GarlandYou're almost there, the H2 server used for local dev only allows https://docs.datomic.com/on-prem/configuration/configuring-embedded-storage.html#local-dev-convenience. Follow the instructions in that page to set passwords and enable remote access.#2022-06-1620:22Leaf GarlandThe reason it works with host networking is because that puts your docker container on the same host, so the default of local connections only still works.#2022-06-1711:33tvaughanUsing 0.0.0.0 as the Datomic host IP address seems suspect to me #2022-06-1714:37favilaThe latest datomic on-prem supports JDK17 (I think via an artemis upgrade). However we can’t get the analytics (presto server) product to run on 17. Is that known and expected or are we doing something wrong?#2022-06-1714:38tcrawleyit fails with:
2022-06-17T14:09:55.105Z INFO main com.google.inject.Guice An exception was caught and reported. Message: java.lang.NoClassDefFoundError: Could not initialize class com.google.inject.internal.cglib.core.$MethodWrapper
java.lang.IllegalStateException: Unable to load cache item
at com.google.inject.internal.cglib.core.internal.$LoadingCache.createEntry(LoadingCache.java:79)
at com.google.inject.internal.cglib.core.internal.$LoadingCache.get(LoadingCache.java:34)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator$ClassLoaderData.get(AbstractClassGenerator.java:119)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:294)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:65)
at com.google.inject.internal.BytecodeGen.newFastClassForMember(BytecodeGen.java:258)
at com.google.inject.internal.BytecodeGen.newFastClassForMember(BytecodeGen.java:207)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:69)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:327)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:135)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:105)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:347)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:356)
at com.google.inject.spi.Elements.getElements(Elements.java:104)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:137)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:105)
at com.google.inject.Guice.createInjector(Guice.java:87)
at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:276)
at io.prestosql.server.Server.doStart(Server.java:111)
at io.prestosql.server.Server.lambda$start$0(Server.java:73)
at io.prestosql.$gen.Presto_348____20220617_140950_1.run(Unknown Source)
at io.prestosql.server.Server.start(Server.java:73)
at io.prestosql.server.PrestoServer.main(PrestoServer.java:38)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.google.inject.internal.cglib.core.$MethodWrapper
at com.google.inject.internal.cglib.core.$DuplicatesPredicate.evaluate(DuplicatesPredicate.java:104)
at com.google.inject.internal.cglib.core.$CollectionUtils.filter(CollectionUtils.java:52)
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:69)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:77)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.generate(AbstractClassGenerator.java:332)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator$ClassLoaderData$3.apply(AbstractClassGenerator.java:96)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator$ClassLoaderData$3.apply(AbstractClassGenerator.java:94)
at com.google.inject.internal.cglib.core.internal.$LoadingCache$2.call(LoadingCache.java:54)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at com.google.inject.internal.cglib.core.internal.$LoadingCache.createEntry(LoadingCache.java:61)
at com.google.inject.internal.cglib.core.internal.$LoadingCache.get(LoadingCache.java:34)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator$ClassLoaderData.get(AbstractClassGenerator.java:119)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:294)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:65)
at com.google.inject.internal.BytecodeGen.newFastClassForMember(BytecodeGen.java:258)
at com.google.inject.internal.BytecodeGen.newFastClassForMember(BytecodeGen.java:207)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:69)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:327)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:135)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:105)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:347)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:356)
at com.google.inject.spi.Elements.getElements(Elements.java:104)
at com.google.inject.spi.Elements.getElements(Elements.java:97)
at io.airlift.configuration.ConfigurationFactory.registerConfigurationClasses(ConfigurationFactory.java:164)
at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:223)
... 5 more
#2022-06-1714:50Joe LaneBad news @U09R86PA4 @U06SGCEHJ, Trino itself doesn't even yet appear to support JDK17 https://trino.io/docs/current/installation/deployment.html?highlight=java#java-runtime-environment#2022-06-1714:51favilafollow up q: is datomic analytics using presto or trino now?#2022-06-1714:52Joe LaneThe Datomic Analytics connector is intended for use with Presto 348. It's the version packaged w/ the distribution.#2022-06-1714:53favilawhy the mention of trino then?#2022-06-1714:53Joe LaneBecause you'll never be able to find the "Presto 348" docs. They modified the branding and SEO in place in the transition from "Presto" to "Trino".#2022-06-1714:54tcrawleyso Presto 348 is really Trino?#2022-06-1714:58Joe Lane• https://trino.io/blog/2020/12/27/announcing-trino.html
• In the transition they introduced many breaking changes
• https://trino.io/docs/current/release/release-348.html is now found on the Trino webpage
• It appears they've taken down the docs as-of Presto 348. I thought they used to version their docs but it appears that isn't true anymore.#2022-06-1714:58favilaSo this code is https://github.com/trinodb/trino/tree/348 and it’s released literally 12 days before the rebrand.#2022-06-1715:00Joe LaneOff the top of my head I believe that is true.#2022-06-1715:00favilaOur confusion stems from the documentation/readmes which links to https://prestodb.io/ and calls it “presto”, but that’s just because distinction didn’t exist at analytics release and the docs weren’t updated?#2022-06-1715:00favilawhich was pre-fork and pre-rebrand.#2022-06-1715:01Joe LaneWhos documentation? Ours?#2022-06-1715:02favilayes#2022-06-1715:02Joe Lanefacepalm Please Hold.#2022-06-1715:02favila$ cat README.txt
Presto is a distributed SQL query engine.
Please see the website for installation instructions:
#2022-06-1715:03favilahttps://docs.datomic.com/on-prem/analytics/analytics-concepts.html does link to http://trino.io now though. But it calls it “presto SQL” (which granted is the old trino name)#2022-06-1715:03favilaEither way, I think our original Q is solved: doesn't support Java 17 because underlying lib doesn't#2022-06-1715:03Joe LaneThat link in the readme i believe used to link to the right place, however, it now redirects to https://prestodb.io/ which is incorrect.#2022-06-1715:04favilasituation is confusing because confusing#2022-06-1715:05Joe LaneThanks for understanding.#2022-06-1715:12Joe Lane@U09R86PA4 RE: cat README.txt, can pwd for me? Trying to figure out who's readme that is.#2022-06-1715:13Joe LaneAlso, we're now aware of https://trino.io/episodes/36.html
#2022-06-1715:21favilapresto-server#2022-06-1715:21tcrawleyInside the datomic-pro distribution#2022-06-1715:38Joe LaneI see what happened. We zipped up 348 when it was new and put it in S3. This is a violation of "cool url's don't change" caused by the rebrand.#2022-06-1715:46plexusTo be precise, Datomic bundles io.prestosql/presto-server 348. This product no longer exists, it was forked and superceded by com.facebook.presto/presto-server and io.trino/trino-server.
I asked a while ago about upgrade plans but didn't get any response, so it's also not clear yet which side of the fork they will continue with.#2022-06-1715:46plexusThey did strip out non-datomic plugins from Presto, if you want those you can find the original distribution here https://repo.maven.apache.org/maven2/io/prestosql/presto-server/348/#2022-06-1715:47Joe LaneWe will continue with Trino
#2022-06-1715:45jdkealywhat’s the best way to do bulk updates in memory constricted environments ?#2022-06-1715:46jdkealyi presume the worst way is to do a
(->> query
(map #(transact [{:db/id % :other/attr "xyz"}]))))#2022-06-1715:55jdkealyi've got a pod with 2 gigs of ram, it seems to perform ok except in the case of bulk updates#2022-06-1715:56jdkealyi guess async writes is the way to go… if you deref the results, that's where you get into trouble ?#2022-06-1900:41souenzzo(let [vs [...]
tx-data (async/chan)
void (async/chan (async/sliding-buffer 1))
n 5]
(async/pipeline-blocking n
void
(map (fn [tx]
@(d/transact conn tx))) ; transact the item taken from the channel, not the channel itself
tx-data)
(doseq [i vs]
(async/>!! tx-data i)))#2022-06-1900:41souenzzoyou can play with the value of n#2022-06-1900:42jdkealydon’t you want to not deref the transact ?#2022-06-1900:43jdkealyor does that help create back pressure ?#2022-06-1902:46souenzzoNot sure.
But in my head, it's important to deref, to make sure that you will have only 5 transacts running concurrently#2022-06-2110:28Drew VerleeWhy use core async here? I'm probably missing something because i don't understand blocking pipeline.#2022-06-2117:08souenzzoit gives you more control over your threads. the total number of, the coordination, etc.#2022-06-2201:32Patrick BrownWhat exactly is the difference between {:server-type :ion} and {:server-type :cloud}?
I'm turned around, because I'm querying and transacting locally with the server-type as ion? I can't help but feel I've got my system wired up wrong, perhaps so wrong it's working. CHEERS!#2022-06-2210:47jcfDoes this answer from Stu help? https://ask.datomic.com/index.php/590/when-would-i-want-to-use-server-type-cloud#2022-06-2210:51Patrick BrownYESSS! That was clear and simple. Thanks @U06FTAZV3
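To make the distinction concrete, here is a hedged sketch of the two client arg-map shapes. This is not authoritative: the region, system, and endpoint values are hypothetical placeholders, and the exact keys for your Datomic Cloud version should be checked against the client docs. The idea is that `:server-type :cloud` is for code running outside the system (it reaches the cluster through an endpoint), while `:server-type :ion` is for code running inside the system, such as ion code on a cluster node.

```clojure
;; Sketch only — all values below are hypothetical placeholders.

;; Outside the system (e.g. a peer process on your laptop or a separate
;; service), the client goes through the system's endpoint:
(def cloud-cfg
  {:server-type :cloud
   :region      "us-east-1"                                            ; hypothetical
   :system      "my-system"                                            ; hypothetical
   :endpoint    "https://entry.my-system.us-east-1.datomic.net:8182/"}) ; hypothetical

;; Inside the system (ion code deployed to the cluster), no endpoint is
;; needed; the client talks to the node it is running on:
(def ion-cfg
  {:server-type :ion
   :region      "us-east-1"    ; hypothetical
   :system      "my-system"})  ; hypothetical
```

With either map you would call `(d/client cfg)` and then `(d/connect client {:db-name "..."})` from `datomic.client.api`.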
#2022-06-2212:04Robert A. RandolphThe information is also listed here: https://docs.datomic.com/cloud/ions/ions-reference.html#server-type-ion datomic#2022-06-2213:38Jakub Holý (HolyJak)Hello! Why can't I download the ions library? I just get
> Downloading: com/datomic/ion/1.0.59/ion-1.0.59.pom from datomic-cloud
> Downloading: com/datomic/ion/1.0.59/ion-1.0.59.jar from datomic-cloud
> Error building classpath. Could not find artifact com.datomic:ion:jar:1.0.59 in central (https://repo1.maven.org/maven2/)
so it seems like it does not find it in the repo. I suppose that my authentication is fine because before I was missing it in my ~/.m2/settings.xml and then clj was failing with "ExceptionInfo: Unexpected error downloading artifact from datomic-releases-1fc2183a {:bucket "datomic-releases-1fc2183a", :path "maven/releases/com/datomic/ion/1.0.59/ion-1.0.59.pom", :reason :cognitect.anomalies/fault}", which is understandable. So auth is fine but the .pom and/or .jar are not in the s3 bucket? My deps.edn has
:mvn/repos {"datomic-cloud" {:url ""}}
:deps {org.clojure/clojure {:mvn/version "1.11.1"}
com.datomic/dev-local {:mvn/version "1.0.243"}
com.datomic/ion {:mvn/version "1.0.59"}
...
🙏#2022-06-2218:27jcfI vaguely remember having to grant access to an IAM user when pulling down Datomic dependencies from that bucket a while back.
It might be worth double checking your AWS credentials allow you to read from that bucket.#2022-06-2218:28jcfThis might be relevant: https://ask.datomic.com/index.php/546/could-not-find-artifact-com-datomic-ion-jar-0-9-48-in-central#2022-06-2220:11Quentin Le GuennecDoes datomic support reciprocal relationship? If so, where can I find documentation on it?#2022-06-2220:13favilaDo you mean reverse lookup of a reference?#2022-06-2220:15favilaall ref types can be followed backward in entity-map and pull expressions using _ in the name part, e.g. :foo/_bars is the reverse of :foo/bar, i.e. on a bar, it gives you all foo that reference it via :foo/bars#2022-06-2220:16favilaIn datalog queries there’s no difference. Whether [?foo :foo/bar ?bar] is “backwards” or “forwards” depends on what index is used.#2022-06-2220:17favilaThis is cloud: https://docs.datomic.com/cloud/query/query-pull.html#reverse-lookup This is on-prem: https://docs.datomic.com/on-prem/query/pull.html#reverse-lookup#2022-06-2222:11Quentin Le GuennecThank you, exactly what I was looking for.#2022-06-2411:44Patrick BrownWhat is datomic.ion.lambda.api-gateway/ionize? I can't find any docs. It looks like just what I need to ionize a ring handler. It also appears from doing a code search for it on github that it is intended for use with a proxy port. Is this still a viable route in 2022? How did people find out about this function and where is it documented?
Anybody who has made a ring handler an ion before, how did you do it, HTTP direct? Lambda? I'm having a rough time getting an app that works with my datomic cloud database remotely to work as an Ion inside the VPC. CHEERS AND THANKS!#2022-06-2412:01Alex Miller (Clojure team)I think the relevant doc is here if that helps: https://docs.datomic.com/cloud/ions/ions-reference.html#ionize#2022-06-2412:29Patrick BrownYes! Thanks Alex for the link! I missed that, because I skipped over that section. It is titled 'Older Versions of Datomic Cloud' I am using the latest Datomic cloud and a production topology. This does not appear to apply to me, but in searching for some source of people doing what I'm trying to do it appeared to be the path most traveled.
Alex, if you were going to interface with Datomic cloud via a ring handler, how would you deploy your app? I initially wanted to use Ions because I liked the idea of off-loading the work of HA and easy deployment. I have an app that queries and transacts directly with my storage stack on a datomic cloud VPC from outside, but I'm running into major issues getting it to deploy as an Ion. In your expert opinion, what's the best path to move forward? I know it's a subjective question, but I very much need some solid advice. CHEERS!#2022-06-2414:08Joe Lane@pat561 From our https://docs.datomic.com/cloud/ions/ions-reference.html (which you should start reading from the top) we have section on https://docs.datomic.com/cloud/ions/ions-reference.html#web-ion which doesn't use ionize .
From there we link to the https://github.com/Datomic/ion-starter project, which has an https://github.com/Datomic/ion-starter/blob/master/src/datomic/ion/starter/http.clj#L16 without ionize.#2022-06-2414:18Patrick BrownI hope I'm not being too dense... HTTP Direct essentially does the work that in prior versions was ionizing a lambda function, so deploying a handler as an Ion is done via :http-direct my-ns/my-handler. So for my specific use case, I'd like to call http-direct, and lambdas don't apply.#2022-06-2414:25Joe LaneI don't understand what problem you're having. Forget about ionize, it's unrelated to :http-direct. Have you gone through the ion-starter project I linked to above?#2022-06-2414:35Patrick BrownYes, I've deployed it, then made some tweaks and redeployed. Everything is working from the starter project in my cloud stack just fine. My problem came when I tried to move to my application. I had incompatibilities causing failed deploys and the error I got mixed my mind up on where exactly they came from. I may be sorted. I hope I no longer have a knowledge problem, but a code one. It'll take some significant refactoring before I can go from the starter project working to knowing if I'm understanding things right for my specific needs. I do appreciate the help. Things are getting clearer.#2022-06-2514:43Patrick Brown@U0CJ19XAM & @U064X3EF3 Thanks for your support. I was right proper turned around. Once I pin-pointed the issue with my application code, Deploy was as easy as should be expected.
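For readers following along, a minimal sketch of a web handler in the shape the ion-starter project uses, with no `ionize` involved. The namespace and the exact `ion-config.edn` key shape here are illustrative assumptions; check the ions reference and the ion-starter repo for your Datomic Cloud version.

```clojure
;; Sketch only: a plain ring-style handler served via HTTP Direct.
(ns my.app.http)   ; hypothetical namespace

(defn handler
  "Web ion entry point. Wired up in ion-config.edn with something like
   {:http-direct {:handler-fn my.app.http/handler}} — shape assumed from
   the ion-starter project, verify against your version's docs."
  [{:keys [uri] :as request}]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    (str "hello from " uri)})
```

The handler is an ordinary function from a request map to a response map, so it can be exercised at the REPL without deploying anything.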
#2022-06-2412:34Patrick BrownHOW TO DEBUG ION DEPLOY ISSUES WITH APPLICATION CODE?
Sorry, if I'm monopolizing the space here, but I just have so many questions.
When you are trying to troubleshoot deployment problems for Ions, how do you ensure that you get timely and instructive errors? Right now, I go digging through my CloudWatch logs and look for the errors in my specific instance, then if I can make sense of them, I make changes and repeat. This is obviously inefficient and frustrating, but I don't know of a better way. Any ideas or correction on my approach are appreciated! CHEERS!#2022-06-2522:44jdkealyI started using logback in my clojure application and lots of datomic logs started appearing. Is there any way to ignore them completely ?#2022-06-2522:50emccuelogback has a configuration file https://logback.qos.ch/manual/configuration.html#2022-06-2522:51emccuehttps://stackoverflow.com/questions/47397442/disable-the-log-from-specific-class-jar-via-logback-xml#2022-06-2522:59jdkealyoh beautiful that worked#2022-06-2816:38icemanmeltingGuys, quick question, I am testing datomic’s performance in terms of throughput when ingesting data. I am running the transactor, peer and the pg instance inside docker, each with its own container. What happens is that the transactor just keeps dying, without giving any error whatsoever.#2022-06-2912:50chrisblomGiven that there are no logs I suspect the transactor gets killed by the OOM killer, how much memory is available in the container? Can you inspect the container after the process is killed?#2022-06-2912:54chrisblomNote that the JVM uses more memory than just heap memory, so at 3.8GB heap the total memory used by the JVM is more.
If you are also running peer and postgres maybe the problem is that you are just running out of memory and the most memory hungry process gets killed.#2022-06-2913:42icemanmeltingI have found the issue, and you are going to laugh at me kappa#2022-06-2913:43icemanmeltingIn my desperation, I turned the heap max value to 8G, and the container had a max of 8G as well, so basically the jvm was trying to increase the heap and the process just died#2022-06-2913:43icemanmeltingI went ahead and increased the transactor’s container to 12 G, and it has been running without issue for 15 hours 😄#2022-06-2816:39icemanmeltingI have given it the 4GB of heap max as per production recommendations, and what i have noticed is that it usually dies at around 3.8 GB used reported by docker#2022-06-2816:39icemanmeltingthe part that puzzles me most, is the lack of information in the logs, not docker logs, the logs inside the container itself#2022-06-2916:02JAtkinsI guess this is a semi-periodic check - do you guys plan to adapt Datomic Console for Datomic Cloud? Or, in particular Dev Local? I’d find it quite useful if it is around.
However, when talking about idents (https://docs.datomic.com/on-prem/schema/identity.html#idents) it does say that
> Idents should not be used as unique names or ids on ordinary domain entities. Such entity names should be implemented with a domain-specific attribute that is a unique identity.
This last advice is also backed by the fact that idents are in a special cache always in memory, and that this place is permanent: if you rename an ident the old one still works, there's no retracting from this special cache. Which doesn't seem like a good place to store domain knowledge.
Is this a case in which these pieces of advice were given at different times and one is superseded? What are you folks doing in this case?
An example of my use case would be to store the phase of an object, :object/phase could be :phase/entry :phase/storage :phase/exit. Should i use a ref and db/idents or a keyword?#2022-06-2916:29ennI have always understood this advice:
> Idents should not be used as unique names or ids on ordinary domain entities.
as referring to non-enumerated unique identifiers. I.e., user IDs or similar, where the set of potential IDs is open.
I would personally use idents for the :object/phase use case you describe.#2022-06-2917:07JohnJIf you don't mind having application code verifying valid keywords (which you should anyway) I would go with keywords, for small pools of enumeration the performance impact won't be noticeable#2022-06-2917:40favilaA rule of thumb: if it’s something the developer creates as part of their own maintenance of the data model, you can use an ident; but idents ought not to be created by users#2022-06-2917:40favilaidents are part of the domain model not domain data
#2022-06-2918:13JohnJby "you can use an ident" you imply a keyword is fine too? how helpful are the constraints of enumerated idents in the real world?#2022-06-2918:14favilaThere are no ref constraints except the ones you impose yourself#2022-06-2918:16favilaBenefits of ident-entities: you can change their name (old idents continue to work even after retracted), you can add metadata (e.g. membership in an enum, a :db/doc), you get a VAET index.#2022-06-2918:17JohnJtrue, no ref constraints but the idents do have to exist#2022-06-2918:17favilayeah, that’s true#2022-06-2918:17JohnJ(have been created beforehand)#2022-06-2918:18favilaattribute predicates can now give you some safety there#2022-06-2918:18favilafor ordinary values#2022-06-2918:19JohnJquerying is also a bit more tedious with idents#2022-06-2918:20favilayou know that [?e :ref-attr ?ident] works if :attr is statically known?#2022-06-2918:21favilaand [?ident …] I think always works#2022-06-2918:22favilabut yes, there are still situations where the indirection makes it a little more work#2022-06-2918:22favilaand to get keywords as values in pull expression results requires an xform
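Tying this thread together, here is a sketch of the ident-enum modeling for the :object/phase example from the question, including the pull shape behind favila's point about needing an extra step to get keywords back. The attribute names come from the thread; the schema and pull calls are illustrative, not a verbatim recipe.

```clojure
;; Sketch: enum members are plain ident entities, referenced by a ref attr.
(def phase-schema
  [{:db/ident       :object/phase
    :db/valueType   :db.type/ref
    :db/cardinality :db.cardinality/one
    :db/doc         "Current phase of the object"}
   {:db/ident :phase/entry}
   {:db/ident :phase/storage}
   {:db/ident :phase/exit}])

;; After transacting the schema, you can assert the enum value by ident:
;;   @(d/transact conn [{:db/id "obj-1" :object/phase :phase/entry}])
;; A pull returns the ref as a nested map; pulling :db/ident inside it
;; gets you back to the keyword (hence the xform/post-processing remark):
;;   (d/pull db [{:object/phase [:db/ident]}] obj-eid)
;;   ;; => {:object/phase {:db/ident :phase/entry}}
```

The schema itself is plain data, so it can be inspected and validated before ever touching a connection.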
#2022-06-2918:24JohnJhmm have to check the app, can't remember#2022-06-3017:25Vishal GautamFor datomic cloud, what is the size limit of the LRU object cache?#2022-06-3019:20camdezRelatedly…for Datomic On-prem, is there a way to check the (current) size of the object cache?#2022-07-0113:48bhurlowthe datomic.process-monitor logs output ObjectCacheCount#2022-07-0113:48bhurlownot sure about mb size#2022-07-0114:19camdezHmmm…thanks! I don’t see that value in my process-monitor output, but this is a quite old build of Datomic, so perhaps it was added later.#2022-07-0512:51favilafor datomic on-prem, it is always half of Xmx (if unspecified) or the system property value you specified#2022-07-0515:04camdezThanks, @U09R86PA4. Is it possible to know how much of that allocated cache space has been used though?#2022-07-0515:04favilaAll of it is used?#2022-07-0515:04camdezAs soon as it is allocated it gets fully used?#2022-07-0515:05favilaIt’s not heap available to the application#2022-07-0515:05favilaWhether it gets full enough to start evicting entries is another matter#2022-07-0515:05camdezI understand it’s reserved for use as object cache, I just mean is it possible to know how full it is?#2022-07-0515:07favilaI only know about metrics for hit rate and entry count #2022-07-0515:07camdezCool. Thank you!#2022-07-0515:08favilaPresumably something can tell you mb (it’s a guava cache underneath) but only small dbs will never get full#2022-07-0321:26Quentin Le GuennecHello, it seems like I'm getting tx-data from the previous transaction in reports from tx-report-queue. 
Is it necessarily true that one transaction will produce one and only one push on the tx-report-queue , with exactly and only the transacted data in the report?#2022-07-0322:03Quentin Le Guennecnvm, the issue was on my side.#2022-07-0507:35Quentin Le GuennecHello, is there a way to get the transaction data (db-before, db-after, tx-data, etc) as when a given entity was last transacted in the database?#2022-07-0510:59jcfTo get the database values: https://docs.datomic.com/client-api/datomic.client.api.html#var-as-of#2022-07-0510:59jcfYou can query for the transactions associated with a particular entity and get each transaction instant.#2022-07-0510:59jcfThose instants will allow you to get the database values.#2022-07-0511:02jcfSee also: https://docs.datomic.com/on-prem/time/filters.html#2022-07-0511:02jcfMixing cloud and on-prem above. 🙈 #2022-07-0516:47souenzzoFind the "given entity was last transacted in the database"
Well, an entity has many attributes. Each attribute has its last transaction.
For now, we will ignore the attributes and get the newest one:
'[:find (max ?tx)
  :in $ ?e
  :where
  [?e _ _ ?tx]]
Now we can use (d/as-of db tx), as @U06FTAZV3 said, to get the :db-after
To get :db-before is a little tricky: (d/as-of db (dec tx))
The :tempids value is impossible to "recreate"
The tx-data value is tricky too:
'[:find ?e ?a ?v ?tx ?op
  :in $ ?tx
  :where
  [?a :db/ident]
  [?e ?a ?v ?tx ?op]]
Where $ is the db-after
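The recipe above can be collected into a single helper; a sketch under the assumption of the peer API (the function name is mine, and :tempids is omitted since, as noted, it cannot be recreated):

```clojure
;; Sketch assembling souenzzo's steps; requires a running Datomic peer.
(require '[datomic.api :as d])

(defn last-tx-report
  "Approximate the tx-report for the last transaction that touched entity e:
   the newest tx across all of e's attributes, plus db-before/db-after and
   a tx-data-like relation. :tempids cannot be reconstructed."
  [db e]
  (let [tx       (ffirst (d/q '[:find (max ?tx)
                                :in $ ?e
                                :where [?e _ _ ?tx]]
                              db e))
        db-after (d/as-of db tx)]
    {:db-before (d/as-of db (dec tx))
     :db-after  db-after
     :tx-data   (d/q '[:find ?e ?a ?v ?tx ?op
                       :in $ ?tx
                       :where
                       [?a :db/ident]
                       [?e ?a ?v ?tx ?op]]
                     db-after tx)}))
```

One design caveat: `(d/as-of db (dec tx))` relies on transaction ids being monotonically increasing, which holds for a single database, and the tx-data relation covers all entities in that transaction, not just `e`.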
#2022-07-0511:17folconI'm almost certain the answer to this is going to be no, but is there a version of datomic that is embeddable? If I was writing an application that was deployed to end-users for example? Is there some flavour of datomic that I can bundle into a dmg?
I have an application that I've been writing that needs to run both in the cloud and as a local instance. I've been handling the local version via datascript, but now that requirements have shifted, I'm wondering if swapping to datomic makes sense. If I do end up doing that, it would be nice to be able only have to worry about one type of db.#2022-07-0512:50favilaI think datomic-free license allows embedding and redistribution, but check.#2022-07-0512:50faviladatomic-free is a very old datomic on-prem though#2022-07-0512:50favilait’s lacking many features#2022-07-0513:13folconHmm, it doesn't say anything about embedding, so not sure if that's an ok or not...
https://www.datomic.com/datomic-free-edition-license.html#2022-07-0513:14folconHmm, ok, what about dev-local, I can't seem to find the license for that, I'm assuming it's a no by default, but would be good to check.#2022-07-0513:17favilaI think section 3 and 4 together allow datomic-free embedding with restrictions#2022-07-0513:17favilaNAL and not speaking for Cognitect but I believe that was the intent#2022-07-0513:17favilaworth asking directly if you are unsure#2022-07-0513:18faviladonno about dev-local#2022-07-0513:19folconYea, it would be good if someone from cognitect can answer this 😃...
If the response is a no, that's fine. Just want some clarity 😃...#2022-07-0513:26folconHmm, ok, this thread[0] is a little concerning, on the one hand it seems like what you're suggesting with datomic free may be acceptable behaviour, Joe[1] says we need something to include as a dependency for our libs and Marshall doesn't tell him that he's not allowed to do that. So this is perhaps viable.
On the other hand, it seems like there's no further movement on this, so if I proceed here I'll have to accept some divergence between my embedded and cloud / ions versions, which is frustrating.
It may be worthwhile seeing if datascript may be the answer to the local embedded approach and then see how much api difference exists between it and cloud / ions.
Alternatively datascript written straight to s3 or dynamodb may also be viable.
Thanks for the suggestion @U09R86PA4 =)...
• [0]: https://forum.datomic.com/t/datomic-free-being-out-phased/1211/9
• [1]: https://forum.datomic.com/t/datomic-free-being-out-phased/1211/7#2022-07-0900:14HuahaiThere are several alternatives that are embeddable, e.g. Datalevin, Datahike, Asami, etc.#2022-07-1215:11folconSure, though with those there's some question around what difference of api exist?
I do want to take a small but complex example and try and benchmark it within various dbs.#2022-07-0606:57PrashantHi,
I have started experimenting with Datomic Analytics.
The intention is to use the bundled presto server to run ad-hoc report queries e.g. number of purchases where the time window is between x and y (time of purchase is stored as a fact).
My transactor is running on-prem.
• I was curious where to put the https://clojurians.slack.com/archives/C03RZMDSH/p1579027580072600?thread_ts=1578947312.069100&cid=C03RZMDSH files?
◦ Do they have to be in peer or transactor ?
• Does presto server need to run on the transactor ?
◦ Can I have a separate deployment of presto server?
I would greatly appreciate if anyone can nudge me to any tutorial/walk through for setting Datomic Analytics up since the documentation is really thin.
cc: @dazld
#2022-07-0617:20favilaThis is how to configure it:
Cloud: https://docs.datomic.com/cloud/analytics/analytics-configuring.html
On-Prem: https://docs.datomic.com/on-prem/analytics/analytics-configuring.html#2022-07-0617:22favilaAt a very high level, datomic analytics is a presto/trino installation with a datomic client api connector.#2022-07-0617:23favilait is a separate process: you can run it anywhere that is network-connected to a peer server#2022-07-0617:24favila> Metaschema files are .edn files in the datomic subdirectory of Trino’s etc-dir. Metaschema files can have any name you find convenient, and Datomic analytics will automatically associate metaschemas with any database that has matching attributes.
#2022-07-0617:24favilaFrom the docs#2022-07-0617:27favilaSo you need a “normal” datomic system (cloud or on-prem). If using on-prem you also need a peer-server running (it’s a peer process that provides the client api--cloud only provides the client api). Then you add datomic analytics (presto/trino) and point it at the thing that provides the client-api (peer-server for on-prem, the cloud service itself for cloud).#2022-07-0621:12JohnJprobably worth mentioning that if datomic's datalog isn't a barrier then setting up presto is just pure overhead#2022-07-0621:13favilapresto can handle much larger queries than datomic’s current datalog implementation, and it has much richer aggregation options#2022-07-0621:16favilathe connector uses memory-efficient divide-and-conquer strategies (using undocumented functions that partition attribute indexes into ranges) so that the intermediate result sets in datalog don’t OOM. An equivalent naive datalog query can easily just take too much memory to complete#2022-07-0621:18favilaof course the connector is implemented with the client api so yes, in theory you are right. In practice however, analytics can handle queries with much bigger intermediate result sets using less memory, and often faster wallclock time because of parallelism and reduced memory pressure#2022-07-0621:29JohnJInteresting, don't remember seeing anything about performance/efficiency of the connector in the docs(maybe is included now). So yeah, since it requires the peer server I assumed that's where the bottleneck would be (and maybe storage depending on what one uses)#2022-07-0621:30favilaThe peer server can still be bottleneck.#2022-07-0621:30favilabut the real bottleneck is the datomic query engine is not that smart#2022-07-0621:34JohnJgot it, do you use it in production for non-analytics?#2022-07-0621:35favilawe use it for non-analytics, but not in production#2022-07-0621:36favilawell, maybe it’s considered analytics uses. 
We’re not doing it for business purposes but for schema maintenance, checking cardinality, counting, etc#2022-07-0621:36faviladata integrity, histograms#2022-07-0621:36favilathat kind of stuff#2022-07-0621:36favilaanything that isn’t a selective query#2022-07-0621:37favilathe equivalent datalog usually doesn’t work at all. d/datoms or index-pull can often do it, but it’s much more thinking and typing#2022-07-0621:37favilaso I hold my nose and type the SQL#2022-07-0621:42JohnJ😉#2022-07-0621:43JohnJdatomic can become a heavy operational burden#2022-07-0621:44JohnJmaybe they will ease it with cloud and provide pre-setup trino but it’s still another process to monitor#2022-07-0621:49JohnJby data integrity you mean using trino to check for corrupt data?#2022-07-0621:49favilachecking invariants#2022-07-0706:00PrashantThanks a ton @U09R86PA4 and @U01KZDMJ411.
One question though: metaschema edn files need to be in datomic-pro-<version>/presto-server/etc/datomic/, right?#2022-07-0711:35cl_jHi, guys, what is the normal range of datomic cloud write throughput? I get about 30/sec. When testing with in-memory dev-local, I can get 10 times faster.#2022-07-0723:55steveb8nI don’t have answers/numbers but I am interested in this. You will need to be more specific though e.g. which size AWS instance, are you using transaction fns, etc#2022-07-0805:54cl_jhi @U0510KXTU it’s running on i3.large, all the throughput is for transactions (by d/transact); no queries are performed during the process.
Hi I made a quick search in this channel to find tools for visualizing Datomic schema. The most promising tool I found was https://github.com/hodur-org/hodur-datomic-schema. This is nice because you can create entity types and connect them through attributes. I would like to share with you an example I made. Here is a simple https://gist.github.com/athanhat/bb581d77cfe224c144a19ea4940f406f and here is my https://gist.github.com/athanhat/7aa53c54110cc3258ba52e00128a065c to run and output the result. There is a better https://github.com/hodur-org/hodur-visualizer-schema using GoJS library.
OK that said, I am aware that in Clojure you are trying to break away from entity types, object types in general and work with attribute oriented modeling. But the way you are modeling schema you cannot escape from namespace prefixes and these imply entity types. In any case it is standard practice in any significant large IT project that users want to see and work easily on a white-board with some kind of graph data model that represents an abstract data model. That was the scope of Hodur I guess. In the RDBMs world and more recently in Graph DBMS there are great tools to visualize both the schema and data. So how do you visualize the data modeling process and its evolution in large projects using Datomic ?#2022-07-0816:07JohnJmaybe malli can help you here https://github.com/metosin/malli#dot
#2022-07-0814:07Alex Miller (Clojure team)you might want to look at https://github.com/JarrodCTaylor/schema-cartographer from @jarrodctaylor on the Datomic team
#2022-07-0815:30AthanI had a quick look at schema-cartographer.
1. It creates schema-cartographer dependent edn files when you download schema transactions and schema files from their Web APP UI (see schema.edn) and (schema-txs.edn)
2. You have to work everything on the UI. But I think you cannot download and run this independently on your premises.
3. There is this Create a schema file for an existing annotated ON PREM database from a REPL but again you have to load the resulting output file in the UI that resides at https://schema-cartographer.com
4. It is not flexible enough and it will result in poor visualization when you have a big schema and/or you want to view part of the schema
5. The schema-cartographer UI says GoJS 2.0 evaluation (c) 1998-2019 Northwoods Software Not for distribution or production use http://gojs.net
So what tool Cognitect people use to visualize Datomic schema. Is that the tool ?#2022-07-0817:01jarrodctaylor1. If you create a schema in the UI you can download the schema transactions as an edn file you can directly use to create the database you have designed. As you pointed out you can also download a representation of the schema that can be loaded back into the UI. If you like you can annotate your schema yourself without using the UI https://github.com/JarrodCTaylor/schema-cartographer/wiki/Learn-More-About-Conventions-&-Schema-Annotations
2 & 3. What is it you want to run “independently on your premises”? the UI is just a static application there is no server component. I am already paying to host the static files and offing that for anyone to use, but if you want to host it yourself it is also open source https://github.com/JarrodCTaylor/schema-cartographer-ui
4. What is not flexible enough and what is your concern with the visualizations? I developed the application specifically to work with large complex database schema. You can start at any namespace and navigate up to entities that reference it or down to entities that are referenced by it and the UI updates displaying the path you are traversing and maintains breadcrumbs of how you got to the current visualizations. This has been very helpful to me in both support of existing databases and creation of new schema.
5. The developer has stated in multiple places that the evaluation version is usable indefinitely for open source projects as long as the watermark is displayed https://github.com/naver/pinpoint/issues/1308#issuecomment-454047375 The library is not included with the repo but sourced from the cdn linked from the official project page https://gojs.net/latest/download.html https://opensource.stackexchange.com/questions/7424/can-gojs-be-included-into-an-open-source-project/7788#7788
I cannot speak for anyone else but I wrote this tool to visualize schema and use it regularly.{:tag :div, :attrs {:class "message-reaction", :title "+1"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👍")} " 1")}
#2022-07-0817:40JohnJIs it possible in cartographer to have an entity with different namespaces?#2022-07-0817:59AthanHi @jarrodctaylor thanks for your reply, it makes clear a lot of things.
1. I was confused with the format of the files downloaded from schema-cartographer and I thought Datomic cannot parse them directly. Then I found a complete example of a schema-cartographer you wrote https://github.com/JarrodCTaylor/schema-cartographer/blob/master/resources/complete_example_schema.clj and realized that in fact you enriched Datomic schema with your annotations. Eventually...
(comment
  (d/transact conn {:tx-data annotation-schema-tx})
  (d/transact conn {:tx-data ice-cream-shop-schema}))
2 & 3. OK thanks, that is great, I did not know that you have also made available the UI and as you realize there are worries of making a large production database schema of a company visible at some remote address/tool.
4. Well, I am sure you understand that schema-cartographer is definitely not the state of the art in schema - domain model visualization, but who cares ☺️. What I find extremely useful is a basic feature to visualize part of a big schema by selecting related entity types you want or at least starting at a node (entity type) and getting all links to that node (related entity types). I am not sure if that is what you write in your (4) reply. Next comes the ability to grab groups of nodes, shaping the line any way you like, etc...
5. OK thanks that explains perfectly well why we see that message appearing on your application
It's also great that you share this tool with all existing features with the rest of us.#2022-07-0818:13jarrodctaylor4. This is a project developed by myself over many evenings and weekends and I have received no compensation for it. I have provided it for free and hosted portions of it at my own expense for the convenience of others. It has been useful to me and makes no claims of being state of the art.
With that said for the price it is a hell of a deal 😏 and does exactly what you are describing. When you navigate to an entity grouped in the UI by a namespace the visual will be of that entity, its attrs and all next level referenced entities. The NS Details tab displays links to all other entities that reference the one selected and you can navigate up through them as needed. You can also drill down through referenced attrs as well. Both directions preserve previously visited entities and keep breadcrumbs showing the navigation path.#2022-07-0818:16jarrodctaylor@U01KZDMJ411 the current implementation groups “entities” by namespace. This is of course not a full Datomic definition of entity where an entity can be comprised of any number of attrs having any namespace. That is an enhancement that I would like to implement someday but that day hasn’t yet arrived.
#2022-07-1111:42AthanFor anyone reading this thread the best available tool for processing native datomic schemata at the time of reading is schema-cartographer and it is free including the UI web app tool (thanks again @jarrodctaylor).
I have played both with the UI and the serialization of schema into edn files. It takes some time to become adapted to the schema language/format and absorb the differences from Entity-Relationship diagramming and modeling, but it is definitely worth it.
Newcomers to the Datomic DBMS should note that schema-cartographer uses a Datomic client-to-peer-server connection, i.e. it requires datomic.client.api, but it is not hard to read the code and change it to work with an in-memory database and other kinds of connectivity.#2022-07-0817:14Drew VerleeIs there a way to list all datomic https://docs.datomic.com/on-prem/reference/database-functions.html? [edit] this likely isn't related to my problem.#2022-07-0817:40Drew VerleeGiven a lazy-seq abstraction over an HTTP api as described in https://www.juxt.pro/blog/new-clojure-iteration how would you then create batches of rows to transact into datomic?
I'm imagining something like:
(->> {:marker 0 :page-size 10} fetch lazy-concat (partition 10) db-transact!)
Is partition going to correctly "fetch" in lazy chunks of 10 to match the page-size? Or might partition try to, behind the scenes, grab more than 10 and cause extra api calls/fetches?#2022-07-0817:59Alex Miller (Clojure team)partition, and lazy sequence functions in general, give no guarantees about how lazy they are
#2022-07-0818:02Alex Miller (Clojure team)even if you know the behavior of a particular function, it is hard to predict what the actual behavior might be from a mix of chunked and non-chunked lazy seq ops. so if you care, use iteration, or loop/recur, or something else
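Since lazy-seq functions make no batching guarantees, one option is to drive the pagination eagerly with clojure.core/iteration (Clojure 1.11+) plus a partition-all transducer. A sketch only: fetch-page, :rows and :next-marker are hypothetical names standing in for the paginated API in the linked post:

```clojure
(require '[datomic.client.api :as d])

;; Hypothetical paginated fetch: returns {:rows [...], :next-marker n-or-nil}.
;; Each page is requested exactly once; batching happens in a transducer,
;; so no lazy-seq chunking is involved.
(defn transact-all! [conn fetch-page]
  (->> (iteration (fn [marker] (fetch-page {:marker marker :page-size 10}))
                  :initk 0
                  :kf :next-marker   ; nil next-marker ends the iteration
                  :vf :rows)
       (eduction cat (partition-all 10)) ; flatten pages, then batch by 10
       (run! (fn [batch] (d/transact conn {:tx-data batch})))))
```

The d/transact call uses the client-API arg-map shape; the peer API's (d/transact conn tx-data) would differ slightly.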
#2022-07-0914:19Quentin Le GuennecHello, are reverse refs valid in a transaction? Example:
{:person/name "Quentin" :person/_son {:person/name "My beloved mother"}}#2022-07-1103:13Drew VerleeI have low confidence, but i think they are only valid in pull syntax.#2022-07-1103:14Drew Verleein the url, reverse-lookup is nested under query/pull https://docs.datomic.com/on-prem/query/pull.html#reverse-lookup#2022-07-1103:17Drew Verleedoes :db.error/not-an-entity Unable to resolve entity mean that temporary id the datomic tx e.g :db/id {:part :foo, :idx -1001139} isn't valid bc there has to be a matching id in another tx like this https://stackoverflow.com/questions/49278607/datomic-db-error-tempid-not-an-entity-tempid-used-only-as-value-in-transacti suggests? Could someone re-phrase "unable to resolve" or give me a hint as to why it can't resolve it?#2022-07-1107:00steveb8nyep, that’s the reason. you have a “temp” id (which is typically a string) in the txn data but there is no corresponding entity with that same id in the same txn.#2022-07-1107:01steveb8ntypically this means you have an entity with a reference to another entity which is being created in the txn. if the referenced (to be created) entity doesn’t have a matching string :db/id you will get this error#2022-07-1107:03steveb8nso your statement that the matching id must be in “another tx” is incorrect. any string value in a :reference field must match another entity with that string in :db/id in the same tx#2022-07-1402:41Drew Verleethanks! Sorry for the late reply.#2022-07-1111:29robert-stuttafordif we set :db/noHistory false , will indexing backfill historical data too, such that d/as-of will start to see values that were culled by the noHistory behaviour?#2022-07-1112:03favilaIn my experience, this happens only if the segments involved need to be regenerated for some other reason#2022-07-1112:05favilaNo history only seems to mean “the next time I index this attribute, I won't write old values to the history indexes”. 
It doesn't actively seek to cull old values or consider that a reason to reindex by itself#2022-07-1112:07favilaOh you’re talking about turning it off after it's been on. No, I don't think it will regenerate history but I'm less sure#2022-07-1112:08favilaI think indexing only considers previous index + novel datoms since last index; it won't go through entire tx log#2022-07-1112:41robert-stuttafordthank you#2022-07-1113:15Quentin Le Guenneccan I transact an entity in a way that the history of that entity will show the newly transacted entity in the past?#2022-07-1117:00ennNo. If you need history to be mutable, you should track that history yourself rather than relying on Datomic history.#2022-07-1204:08favilaThere’s one exception only, you can explicitly set the :db/txInstant of a transaction in the “past” (wall-clock time) as long as there is no existing transaction with a higher value. This is designed for initial imports. https://docs.datomic.com/cloud/transactions/transaction-processing.html#explicit-txinstant#2022-07-1218:41Quentin Le Guennec@U09R86PA4 perfect thank you.#2022-07-1202:16Ian Fernandezdatomic on prem, client, is there any way to implement a AutoCloseable for a datomic connection?#2022-07-2208:43jasonjcknnot sure if this answers your question, but have you seen https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/shutdown might be what you want.#2022-07-1202:17Ian Fernandezand for a tx-report-queue ?#2022-07-1208:46favilaI think I’ve reported this before? _ will unify across tuple-destructures:
(d/q '[:find ?a2
:in ?xs1 ?xs2
:where
[(identity ?xs1) [[_ ?a1]]]
[(identity ?xs2) [[_ ?a2]]]]
[[1 2]
[1 3]]
[[2 2]
[2 3]])
=> #{}
#2022-07-1208:47favilaWorkaround:#2022-07-1208:47favila(d/q '[:find ?a2
:in ?xs1 ?xs2
:where
[(identity ?xs1) [[?_ignore1 ?a1]]]
[(identity ?xs2) [[?_ignore2 ?a2]]]]
[[1 2]
[1 3]]
[[2 2]
[2 3]]
)
=> #{[2] [3]}#2022-07-1208:47favilaThis is with latest on-prem.#2022-07-1208:48favilaI just lost maybe an hour to this problem, and this is after having seen it before…#2022-07-1322:01AthanHi, I am exploring Datomic system partition and at the same time I am learning its query language. I have a couple of questions.
Here is a simple query to fetch all system idents
(sort-by first (d/q '[:find ?e ?ident
:where
[?e :db/ident ?ident]
[(< ?e 72)]]
(d/db conn)))
and here is a modified version to read doc strings
(sort-by first (d/q '[:find ?e ?ident ?description
:where
[?e :db/ident ?ident]
[(< ?e 72)]
[?e :db/doc ?description]]
(d/db conn)))
1. How can I modify the second query so that it returns the same set of id(s) as the first one but also get the doc string for those id(s) that are associated with it ? What is the equivalent to SPARQL OPTIONAL (see this https://w.wiki/5Sp7) ?
2. I noticed there is a gap of :db/id(s)
[4 :db.part/user]
[7 :db/system-tx]
and I tried to pull the following entities with
(d/pull (d/db conn) '[*] 6)
(d/pull (d/db conn) '[*] 7)
Nothing returned back, any particular reason that these entities with [:db/id 6, :db/id 7] do not exist in Datomic bootstrap schema ?
Another question is what if Datomic wants to expand its bootstrap schema in the future, i.e. add some extra functionality, system ident, etc. Have you reserved id range for that? I have counted 71 system idents. Is there a possibility to collide with user customized schema that starts always from 72 ? Is that correct, am I missing something here ?#2022-07-1408:38Athan1. It seems I found the equivalent of OPTIONAL in SPARQL which is https://docs.datomic.com/on-prem/query/query.html#get-some and the query becomes
(sort-by first (d/q '[:find ?e ?ident ?description
:where
[?e :db/ident ?ident]
[(< ?e 72)]
[(get-else $ ?e :db/doc false) ?description]]
(d/db conn)))
#2022-07-1415:34favilaAnother way to do this is to pull. Generally it’s preferred to use query for finding entities and pull to retrieve data from them, rather than to produce a tabular result that combines entity-finding with field extraction.
(d/q '[:find (pull ?e [:db/id :db/ident :db/doc])
:where
[?e :db/ident]
[(< ?e 72)]]
(d/db conn))
#2022-07-1415:35favilaThere is also d/qseq which performs the pull lazily, which can save memory for large results
#2022-07-1415:35favilaand you can parameterize the pull
#2022-07-1415:36favila(d/q '[:find (pull ?e pull-expr)
:in $ pull-expr
:where
[?e :db/ident]
[(< ?e 72)]]
(d/db conn) [:db/id :db/ident :db/doc])#2022-07-1416:46AthanSmashing, thanks Francis great tips indeed for a newcomer like me, I did a quick benchmark with 1000 repetitions, the pull version in find specifications is approx 40% faster although the data set is too small to draw safe conclusions.
1. Without pull - Elapsed time: 1.233 secs
2. With pull - Elapsed time: 0.827 secs
So first it finds all the eid(s) then it's mapping a pull on the result set to retrieve data patterns, something like
(map #(d/pull (d/db conn) '[:db/id :db/ident :db/doc] %) eids)
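A sketch of the d/qseq variant favila mentions (on-prem peer API, using the query-map shape and the conn from the snippets above; the pull runs lazily as the seq is consumed):

```clojure
(require '[datomic.api :as d])

;; qseq defers the pull: results are produced lazily as the seq is
;; consumed, which bounds memory for large result sets
(def ident-docs
  (d/qseq {:query '[:find (pull ?e [:db/id :db/ident :db/doc])
                    :where
                    [?e :db/ident]
                    [(< ?e 72)]]
           :args [(d/db conn)]}))

(take 5 ident-docs) ; realizes only the first five pulls
```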
#2022-07-1513:57daemianmackmorning folks! last i heard, about https://clojurians.slack.com/archives/C03RZMDSH/p1637671187422500?thread_ts=1637608575.417700&cid=C03RZMDSH, datomic cloud backup/restore was being worked on.
is there any news on this front? we’re extremely interested in having a solution.
#2022-07-1515:53jarrodctaylorNothing to share publicly at this time. However, I can say for sure it is actively getting a lot of attention.#2022-07-1515:56Daniel JompheMy guess is that Cognitect is taking its time to develop a solution that'll share as many common parts as makes sense under the hood for both OnPrem and Cloud. They're also probably trying to make it easy. ...we'll see once public announcements come out.#2022-07-1515:59daemianmackthanks @U0508JRJC! glad to hear that — i’m eagerly awaiting further news.#2022-07-1516:00Daniel JompheYes Jarrod, the comment is appreciated!#2022-07-1516:02jarrodctaylorI am not the only team member looking forward to shouting it from the rooftops when we have something to share 🙂
#2022-07-1516:02daemianmackrelated, also curious if anyone has any anecdotes, good or bad around community solutions.
i’m aware of 2…
• fulcrologic/datomic-cloud-backup — haven’t finished reading the source here. it looks robust, but the repo is archived and the README mentions lack of suitability for (unspecified) production needs.
• lambdaforge/wanderung — haven’t started looking at this yet, just found mention of it in this channel.#2022-07-1516:06Daniel JompheOur project's repo still has the needed code to execute backups and restores based on fulcrologic's solution. I don't remember how well it worked or not, though - I exercised it to some point for sure, but before concluding it could save us in dire situations, we'd need to invest a lot more time in trying it out. And the author, Tony Kay, FWIR, is also waiting upon Cognitect. I think I remember him saying he reached limits and constraints making it so that only Cognitect can provide a robust and practical solution.#2022-07-1516:42daemianmackthanks! i found some FUD in an earlier thread around tuple-handling with datomic-cloud-backup too, but haven’t had a chance to dig into it.#2022-07-1515:59Daniel JompheTwo or three team members completed https://max-datom.com/ recently.
AFAIK they all liked it enough to go through to the end.
Our most experienced members say they did it in a few hours.
Our new member (completely new to Datomic and Clojure) seems to have spent a day.
#2022-07-1516:01Daniel JompheThey all did http://www.learndatalogtoday.org/ before doing Max Datom. You, the old one that's been around for a number of years...#2022-07-1516:02jarrodctaylorThanks for the feedback. That is great to hear 🙂#2022-07-1516:03Daniel JompheI wish I had more feedback!
I hope I can take some time to do it myself in the next months.#2022-07-1516:38AthanHi Daniel, your wish becomes true 🙂 here is some feedback as I am doing it....
LEVEL 3
(d/q '[:find ?v ?e
:where [?e :author/first+last-name ?v]] db)
=== Incorrect Query Response ===
[[["Miguel" "Dvd Rom"] 87960930222171]
[["Perry" "Farrell"] 87960930222172]
Better warn the user that order is important here
[:find ?e ?v ...]
#2022-07-1516:49AthanYour schema txs are syntactically incorrect, there is a closing parenthesis
:db.type/tuple)
instead of a closing curly brace in the following segment
{:db/id ":book/name+author"
:db/ident :book/id+name
:db/valueType {:db/ident :db.type/tuple)
:db/cardinality {:db/ident :db.cardinality/one}
:db/unique {:db/ident :db.unique/identity}
:db/tupleAttrs [:book/id :book/name]}#2022-07-1517:10AthanLevel 5
Replace the * pattern with
:book/name {:book/author [:author/first-name :author/last-name]}
to retrieve the desired attributes.
I would prefer to replace [*] with
[:book/name {:book/author [:author/first-name :author/last-name]}]
Otherwise it's a bit confusing for the user#2022-07-1517:29AthanLevel 6
Modify the query to return any entities referencing the id in the query as a value for :book/author using the underscore prefix syntax.
(d/q '[:find (pull ?e [:book/_author])
:where [?e :author/id #uuid "14E86ACF-000B-463E-90CB-CEA0927A97DA"]] db)
=== Incorrect Query Response ===
[[{:book/_author [{:db/id 87960930222175} {:db/id 87960930222176}]}]]
=== Expected Query Response ===
[[{:author/first-name "Napoleon",
:author/last-name "Desktop",
:book/_author [{:db/id 87960930222175} {:db/id 87960930222176}]}]]
My query above does exactly what the tutorial is asking. Add to the exercise "retrieve the first and last names of the author and any entities....." because this is what the tutorial takes as the correct answer
(d/q '[:find (pull ?e [:author/first-name :author/last-name :book/_author])
:where [?e :author/id #uuid "14E86ACF-000B-463E-90CB-CEA0927A97DA"]] db)#2022-07-1517:56AthanLevel 8
(d/q '[:find (pull ?e [:author/first-name :author/last-name])
:in $ ?author-id
:where [?e :author/id ?author-id]] db)
Max Datom informs me Human, you must include the author-id as an argument to the query.
That's because I did not see the
(def author-id #uuid "35636B79-EE46-4447-8AA7-3F0FB351C45C")
and it's a bit confusing that both the argument and the input variable have the same name author-id.
Hmmm
(def author-id #uuid "35636B79-EE46-4447-8AA7-3F0FB351C45C")
(d/q '[:find (pull ?e [:author/first-name :author/last-name])
:in $ ?author-id
:where [?e :author/id ?author-id] db author-id)
Max Datom message EOF while reading
I had to refresh and re-enter the query
(d/q '[:find (pull ?e [:author/first-name :author/last-name])
:in $ ?author-id
:where [?e :author/id ?author-id]] db author-id)
#2022-07-1518:11AthanLevel 10
I am presenting you with your very own Max Datom (TM) polo shirt 🙂
Sure send it to my home address please....
...
I will not spoil the fun writing the query here but again order matters
:find (count ?posts) ?user-name
does not allow you to continue in the next level...#2022-07-1519:15AthanLevel 11
That is interesting....
(d/q '[:find (count ?post) ?user-name
:where [?user :user/first+last-name ?user-name]
(not [?post :post/dislikes ?user])] db)
:db.error/insufficient-binding [?post] not bound in not clause: (not-join [?user ?post] [?post :post/dislikes ?user])
I am trying to understand why as this query without negation works fine although it's not the correct response
(d/q '[:find (count ?posts) ?user-name
:where [?user :user/first+last-name ?user-name]
[?posts :post/dislikes ?user]
] db)
=== Incorrect Query Response ===
[[3 ["Segfault" "Larsson"]]]
=== Expected Query Response ===
[[1 ["E. L." "Mainframe"]]]
OK I read documentation, it seems that this is a not-join case because ?posts is in the :find var clause. So I guess the example given in that level is misleading, correct ?
Oh boy that doesn't work too
(d/q '[:find (count ?post) ?user-name
:where [?user :user/first+last-name ?user-name]
(not-join [?post] [?post :post/dislikes ?user])] db)#2022-07-1519:31jarrodctaylorThanks for all of the feedback. I don’t believe the example is misleading and the solution does indeed use a not clause.#2022-07-1519:35AthanPerhaps I am confused from the documentation and the error message... :db.error/insufficient-binding [?post] not bound in not clause: (not-join [?post] [?post :post/dislikes ?user]) why it complains about my answer then, where is my mistake ?#2022-07-1519:43jarrodctaylorTry adding a clause to bind ?posts for your ?user then only use [?posts :post/dislikes] in the not clause.#2022-07-1520:00AthanOK yes I got it right this time, thanks for the tip, and I will not write the answer to spoil the fun.... but it seems it deviates a bit from the example in Level 12.
Second, my tip for someone that tries to solve it is to understand that negation here implies a set difference operation. That is explained well in the Datomic documentation - https://docs.datomic.com/cloud/query/query-data-reference.html#how-not-clauses-work.#2022-07-1521:05AthanLevel 12 is more or less straightforward, I solved it, but I still have some trouble understanding pull syntax and why it requires a vector for [:post/comments :xform ...] in order to convert it to another key-value pair in the result set. To be continued...#2022-07-1808:12AthanLevel 15 is tough, one has to see the response and then try to replicate it with the correct query format...#2022-07-1812:05AthanDone, I reached Level 17. I think the fraud detection scenario (L15, L16, L17) was not clear, i.e. what account they tried to conceal, for what purpose and which account they present as the legitimate one.
In Level 15 it says
"That transfer looks like we would expect, but why did the funds not make it to Mr. CD? Continue to level 16 to investigate further."
But the response shows that transfer destination was Mr. Muhammad CD and therefore it appears that funds were transferred to him...
That is what I figured out:
Fraudulent account of user Spammy the Bull (destination of original transfer in the past)
Legitimate account of user Muhammad CD (what appears to be the current destination of transfer)
Anyway, overall it was a good experience playing with Max Datom and I would like to thank @U0508JRJC for taking the time to share this tutorial with the rest of us#2022-07-1616:02BenjaminJo I tried to bump com.datomic.ion
Downloading: com/datomic/ion/1.0.59/ion-1.0.59.pom from datomic-cloud
Error building classpath. Failed to read artifact descriptor for com.datomic:ion:jar:1.0.59
I'll try go back to 1.0.58#2022-07-1618:07Benjamin1.0.58 works fine#2022-07-1707:57BenjaminDownloading: com/datomic/ion-dev/1.0.306/ion-dev-1.0.306.pom from datomic-cloud
Downloading: com/datomic/ion-dev/1.0.306/ion-dev-1.0.306.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:ion-dev:jar:1.0.306 in central ()
seems like the same issue with ion-dev 1.0.306#2022-07-1805:20onetomAre there any recommendations somewhere about modelling multi-tenant data?
Should we have a single tenant reference attribute, which is used on every kind of entity or should we have a ref attribute for every "entity-type"?
eg:
{:tenant/ref [:tenant/uniq-id 123] :user/id 345}
{:tenant/ref [:tenant/uniq-id 123] :frob/id 678}
vs
{:user/tenant [:tenant/uniq-id 123] :user/id 345}
{:frob/tenant [:tenant/uniq-id 123] :frob/id 678}#2022-07-1805:26onetomthe 1st solution would allow using a single, common and also simple datalog :where clause or rule, to limit queries to a specific tenant, so it feels less error-prone.
the 2nd solution allows operating on the whole collection of entity-types for a specific tenant, by simply using pull expressions with a reverse lookup ref, eg. (d/pull db-val-for-all-tenants [{:frob/_tenant ['*]}] [:tenant/uniq-id 123]), which would return all frobs of tenant 123.#2022-07-1805:35onetomif i have a single :tenant/ref, i would always need to write datalog queries, for operations, which would be effectively just simple pulls, then i would also need to unwrap them from the resulting, single element vectors, since there is no find-scalar option for Datomic Cloud (https://docs.datomic.com/cloud/query/query-data-reference.html#arg-grammar):
find-spec = ':find' find-rel
find-rel = find-elem+
find-elem = (variable | pull-expr | aggregate)
like for Datomic On-Prem (https://docs.datomic.com/on-prem/query/query.html#query):
find-spec = ':find' (find-rel | find-coll | find-tuple | find-scalar)
find-scalar = find-elem '.'
find-elem = (variable | pull-expr | aggregate)
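The single :tenant/ref option above can be packaged once as a shared datalog rule and threaded through every query. A sketch, using the attribute names from the earlier example:

```clojure
;; One rule scopes any entity to its tenant via the shared attribute
(def tenant-rules
  '[[(of-tenant ?tenant ?e)
     [?e :tenant/ref ?tenant]]])

;; e.g. all users belonging to tenant 123
(d/q '[:find ?e
       :in $ % ?tenant
       :where
       (of-tenant ?tenant ?e)
       [?e :user/id]]
     db tenant-rules [:tenant/uniq-id 123])
```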
#2022-07-1806:22onetomThis talk mentions how filtered databases on Datomic On-Prem, combined with recursive datalog rules, can safeguard per-tenant information retrieval: https://youtu.be/7lm3K8zVOdY?t=919
we would like to achieve the same with Datomic Cloud, but it's unclear how.#2022-07-1810:08onetomforgot to mention that in the above examples, those :user/id and :frob/id are IDs of 3rd-party systems, so I also had to use a composite key, with a unique constraint, like this:
{:user/id 345
:user/tenant+id ["tenant-123" 345]
:tenant/ref {:db/id "tenant-123"
:tenant/uniq-id 123}}
to keep 3rd-party data from multiple tenants within the same DB.#2022-07-1906:07Brendan van der EsWhat's the best way (if there is one) to develop datomic ions in java? In on-prem I would've transacted java functions to the db for use in peers, so I'm curious if there is something analogous for ions.#2022-07-1912:55cl_jwhen iterating through the results of d/tx-range I got the Request Entity Too Large error. Setting a smaller range with the :start and :end parameters helps, but it's still very easy to see this error with some datoms. Previously I could retrieve many more datoms in one d/tx-range request. Any suggestions?
clojure.lang.ExceptionInfo: Response body did not conform to Datomic client protocol
cognitect.anomalies/category: :cognitect.anomalies/incorrect
cognitect.anomalies/message: "Response body did not conform to Datomic client protocol"
http-result: {:status 413,
:headers
{"apigw-requestid" "Vg4UxisCDoEEJsQ=",
"server" "Jetty(9.4.44.v20210927)",
"connection" "keep-alive",
"content-length" "38",
"date" "Tue, 19 Jul 2022 12:17:47 GMT",
"content-type" "application/json"},
:body "{\"message\":\"Request Entity Too Large\"}"}#2022-07-2108:59AthanHi, I am playing with https://github.com/Datomic/mbrainz-sample
It has been restored from a Datomic backup following the instructions and it is running on premises with a Postgres transactor and datomic.api (datomic in-process peer library).
I executed this query that uses a rule for full-text search
"https://github.com/Datomic/mbrainz-sample/wiki/Queries#what-are-the-titles-artists-album-names-and-release-years-of-all-tracks-having-the-word-always-in-their-titles?"
And I am getting back => #{ }
A little investigation on the attributes of :track/name shows that
(d/attribute db :track/name)
=>
#AttrInfo{:id 80,
:ident :track/name,
:value-type :db.type/string,
:cardinality :db.cardinality/one,
:indexed true,
:has-avet true,
:unique nil,
:is-component false,
:no-history false,
:fulltext false}
Therefore full-text search is not enabled. I tried to modify the schema by adding the :db/fulltext attribute
(d/transact conn [[:db/add [:db/ident :track/name] :db/fulltext true]
[:db/add "datomic.tx" :db/doc "enable full-text search for :track/name"]])
;; but it fails....
;; :db.error/invalid-alter-attribute
;; Error: {:db/error :db.error/unsupported-alter-schema,
;; :entity :track/name, :attribute :db/fulltext, :from :disabled, :to true}"
So how am I supposed to run this query with full-text search, any suggestions ?
PS: I made a quick search but I did not find any useful answer on this case. Moreover I did not see any information about how to do full-text search in the https://github.com/Datomic/mbrainz-sample repository.#2022-07-2210:41pyryhttps://docs.datomic.com/on-prem/schema/schema-change.html#schema-alteration#2022-07-2210:41pyryOnly specific schema attributes can be changed as per the above reference; specifically, you can never change whether an attribute should be indexed for full-text search or not.#2022-07-2210:42pyryThe second category of alteration is altering schema attributes of attributes. The supported attributes to alter include :db/cardinality, :db/isComponent, :db/noHistory, :db/index and :db/unique.
You can never alter :db/valueType, :db/fulltext, :db/tupleAttrs, :db/tupleTypes, or :db/tupleType.
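Given that :db/fulltext cannot be turned on after the fact, a plain string predicate can stand in for the fulltext rule in the mbrainz query. A sketch only: it scans every :track/name value, so it is far slower than a real fulltext index, and String#contains is case-sensitive:

```clojure
(require '[datomic.api :as d])

;; substring match on :track/name via Java interop, no fulltext index needed
(d/q '[:find ?track ?name
       :in $ ?needle
       :where
       [?track :track/name ?name]
       [(.contains ^String ?name ?needle)]]
     db "Always")
```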
#2022-07-2210:44pyryAs for how to run the query without full text search, you could instead try using Java interop as specified in https://docs.datomic.com/on-prem/query/query.html#calling-instance-methods.#2022-07-2210:46pyryA suitable candidate for this could be eg. https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/String.html#contains(java.lang.CharSequence).#2022-07-2120:44favilaThere’s an on-prem setting datomic.readConcurrency for peers (not just transactors) according to https://docs.datomic.com/on-prem/configuration/system-properties.html#peer-properties. It says the default is 2x write concurrency…but peers don’t have write concurrency. What is the actual default?#2022-07-2220:37Joe Lane@U09R86PA4 default writeConcurrency is 4, making default readConcurrency 8. I understand peers don't have write concurrency, you'll just have to trust me 🙂#2022-07-2208:55jasonjcknI want to fetch any & all ‘movies’ where any one of its attributes was changed after t1 , here’s what I came up with…
(def t1 #inst "2022-07-21T07:27:18.190-00:00")
(d/q '[:find ?e
:in $ ?t1
:where
[(ground [:movie/id
:movie/description
:movie/name]) [?a ...]]
[?e ?a _ ?tx]
[?tx :db/txInstant ?tx-t]
[(> ?tx-t ?t1)]]
*db* t1)
Is this the right approach? Or is there a better way to do this.
(Also, should I rely on :db/txInstant in your experience, or reify my own, e.g. an attribute like #2022-07-2212:46favilaUse a db with a (d/since db t1) filter applied instead
#2022-07-2212:47favilahttps://docs.datomic.com/on-prem/time/filters.html
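Fleshing out the since suggestion (a sketch): because a since-filtered db contains only datoms asserted after t1, the unfiltered db is still needed to establish that ?e is a movie, so both are passed to the query:

```clojure
(require '[datomic.api :as d])

;; entities with any datom asserted after t1, restricted to movies.
;; $since sees only post-t1 datoms; $ sees the whole db.
(d/q '[:find ?e
       :in $ $since
       :where
       [$since ?e _ _]     ; touched since t1
       [$ ?e :movie/id]]   ; and is a movie in the full db
     db
     (d/since db t1))
```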
#2022-07-2217:49jasonjcknIs this a datomic bug… seems odd behaviour to me…
(<= (clojure.core/inst-ms* (java.util.Date.))
(clojure.core/inst-ms* (java.time.Instant/now))
(clojure.core/inst-ms* (java.util.Date.)))
;; always evals => true
(d/q '[:find ?t1 ?t2 ?t3
:in ?t1 ?t2 ?t3
:where
[(<= ?t1 ?t2)]
[(<= ?t2 ?t3)]]
(java.util.Date.)
(java.time.Instant/now)
(java.util.Date.))
;; always evals to => #{}
The issue seems to be that since Instants have no timezone information, Datomic treats the value of Instant as local clock time, when in fact (java.time.Instant/now) returns a UTC time value (without timezone information).
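Whatever the underlying comparison semantics, a workaround is to normalize all time inputs to one type, e.g. java.util.Date, before handing them to the query (a sketch):

```clojure
;; Date/from converts an Instant to the equivalent java.util.Date,
;; so all three inputs share a single comparable type
(d/q '[:find ?t1 ?t2 ?t3
       :in ?t1 ?t2 ?t3
       :where
       [(<= ?t1 ?t2)]
       [(<= ?t2 ?t3)]]
     (java.util.Date.)
     (java.util.Date/from (java.time.Instant/now))
     (java.util.Date.))
```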
If it’s a bug, I can file a bug report, let me know where.#2022-07-2521:32DaveI'm interested to know if anyone experienced at building enterprise applications using Datomic as a database has experienced advantages or disadvantages with the use of :db.type/ref value types in the creation of their schema. For instance, consider the two schema snippets below which could represent two ways to model a person's first name in Datomic.
{:db/ident :person/first-name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
-or-
{:db/ident :first-name/name
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/unique :db.unique/value}
{:db/ident :person/first-name
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
In the second model, just to be explicit, :person/first-name is a ref to :first-name/name. I know storage is cheap, so that in and of itself is not a reason to prefer the second one. But that aside, say you have hundreds of thousands or even millions of "fred"s in your database: what would be the advantages or disadvantages of each model (if any) under these circumstances?#2022-07-2523:46DaveThanks @U0LAJQLQ1. Taking this to a thread to keep everything in one place in case you care to respond further. Appreciate your time.
> you probably want to use a component, unique is very weird in this case
As in :db.isComponent? :db.unique/value in this case just means you can't have two :first-name/names in Datomic with value "fred". So :person/first-name is just a CaS operation, e.g., my first name used to be Fred, now it's Ralph. Again, just an example as I know folks don't often change their names, but think entities where mutations, at least up until an explicit time whose attribute grouping or :<namespace>/<name>s we control, are common, i.e., today I'm Fred, tomorrow I may be Ralph, and the next day back to Fred or a different name altogether.
> when you use refs, accessing and updating data is more of a pain in the ass
Can you be more specific? In our app, updating data is exclusively a user exercise, i.e., they are telling the system whether they are "Fred" or "Ralph".
> if you are considering the second use case, you prob are going to do that for almost every field in your database, so you can imagine that you are going to need to do a lot of extra work to manage the refs wherever you use them. you are also diluting the meaning of refs for the rest of your data. a ref should be meaningful. adding another ID to a string for all attributes seems insane
Our app will have some very complex entity types, i.e., if we were to build them via flat structures, they could have thousands of fields or attributes that our users have to specify each time they create that entity. Since our app is all about making our users more productive, we've broken entity types down and made them much more granular but still meaningful with regard to the domain workflow. So now, instead of a user having to fill in thousands of fields of mostly scalar values to specify an entity, they are simply refing one to ten or more entities they or someone else has already defined (think Rich Hickey talking about composability in programming only it's composability for our specific domain).
> every time i had done something like what you did in the second example, i have regretted it and changed it to be a flat structure.
Why was that? I ask because with our app, we have no external data sources, i.e., there is no concept of mapping data (or names) to another source, system or database.#2022-07-2523:53pppaulfirst, i need to clarify something. is your second model to primarily save space in the database? because i first thought that you were treating the name as an ID in one of the examples (common use of unique is to do ref lookups). eg (d/pull db '[*] [:person/name "fred"]) , i was not considering the objective of saving space.#2022-07-2523:58pppaulupdating a ref is a pain depending on how you access it. typically if you are swaping around refs you want to make sure you don't have zombie refs. in the case you describe if nobody has a name that refs a :first-name/name, then you may want to delete that. it's an easy way to make zombies when you update a ref and forget to have the ID attached, so you make a new object instead of updating one.#2022-07-2523:58pppaulin this case your ID is the value, so you avoid one pain point, but now you may want to run a GC on your system to clean up unused names#2022-07-2600:04pppaula flat structure in datomic looks as large as it has existing fields. also it looks as large as the pull request at other times. doing pull request on refs requires a lot more work, especially if they are back references. usually renaming, default values, and maybe translations are included in the pull. if you are using db/idents then the pulls involve a bit more work. if you are taking the first-name/name approach then you probably will have a ton of db/idents as keyword refs as well.#2022-07-2600:14pppaulI don't really understand the problem you are solving. if you are making a system where ent fields are mostly indirect, and you want the fields to be immutable, it sounds a bit interesting, but i wonder what the queries will look like. the pull request are going to look bad, you'll probably want helper functions to make them for you (cus you can't use components). 
i use db/idents everywhere in my system as keywords, and they are annoying. i'm guessing that on your system the string IDs are an important feature for user discovery?#2022-07-2602:51DaveThanks for being so generous with your time @U0LAJQLQ1.
> first, i need to clarify something. is your second model to primarily save space in the database?
No. Although my guess (that's all it is, based on how many bytes a primitive, e.g., double, in Java takes vs a reference) is it would. My reason for exploring the second model is based purely on the fact that I see the world in a bottom-up fashion and I'm wondering whether a data model based on the way I see the world could make people doing physical-world things (like manufacturing, which is my subject matter expertise) more productive. Apologies for the lack of context but I find adding too much info in your original post can lead to people just glossing over and not engaging or replying. In our data model, as it relates to my 'person' example, I've added the following for clarification. Note: :db.type/string could be :db.type/uuid or :db.type/long in the case of :<>/id, we just chose to make it a string.
{:db/ident :person/first-name
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one}
{:db/ident :person/id
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/value}
-or-
{:db/ident :first-name/id
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/value}
{:db/ident :first-name/name
 :db/valueType :db.type/string
 :db/cardinality :db.cardinality/one
 :db/unique :db.unique/value}
{:db/ident :person/first-name
 :db/valueType :db.type/ref
 :db/cardinality :db.cardinality/one}
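A sketch of what tx-data for the second (indirect) model could look like; the tempids and id values below are my own illustrative examples, not from the thread.

```clojure
;; Assert the shared name entity and a person that refs it, in one tx.
;; "fred-name" is a string tempid linking the two maps.
(def create-fred-and-person
  [{:db/id "fred-name"
    :first-name/id "ER85W23QQ81"
    :first-name/name "fred"}
   {:person/id "FG53Q211KOL"
    :person/first-name "fred-name"}])

;; Because :first-name/name is :db/unique, a later person can reuse the
;; existing name entity through a lookup ref instead of creating a new one:
(def reuse-fred
  [{:person/id "AB11CD22EF3" ; hypothetical id
    :person/first-name [:first-name/name "fred"]}])
```

Both vectors would be passed as tx-data to d/transact; the second relies on lookup-ref resolution against the unique attribute.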
> updating a ref is a pain depending on how you access it. typically if you are swapping around refs you want to make sure you don't have zombie refs. in the case you describe if nobody has a name that refs a :first-name/name, then you may want to delete that. it's an easy way to make zombies when you update a ref and forget to have the ID attached, so you make a new object instead of updating one.
In our domain case, not sure zombie refs would be a big deal, especially if we choose to make :person/first-name :db.noHistory true.
> in this case your ID is the value, so you avoid one pain point, but now you may want to run a GC on your system to clean up unused names
Given the additional context above, the value of :first-name/id is still (just) a visual identifier (what our user sees on the screen) but in terms of the :db/id, the single, database-wide entity that is "fred", is represented as a long in the datom, and that long is refd in :person/first-name like:
E     A                   V
[001] :first-name/id      ER85W23QQ81
[001] :first-name/name    fred
[002] :person/id          FG53Q211KOL
[002] :person/first-name  [001]
> if you are making a system where ent fields are mostly indirect, and you want the fields to be immutable, it sounds a bit interesting,
Bingo. That's what I mean by 'single, database-wide entity' above. Defining "fred" in terms of a string instead of a person attribute and specifying it as :db.unique/value is such an 'immutable field'. This is not necessarily natural for us humans to do but as a bottom-up thinker, it's my default pov. Most hear "fred" and immediately think, "person's name". But if we want to make "fred" (or any scalar value for that matter) as reusable (use it to compose other entities) as possible, why not make it its own, immutable entity? Btw, Googling "Immutability" 7 years ago is what led us to Rich Hickey's talks, which led us to choose Clojure/Datomic in the first place.
> i'm guessing that on your system the string IDs are an important feature for user discovery?
Not sure exactly what you mean by 'user discovery' but the use of the :<namespace>/id attribute was intentional for querying purposes.#2022-07-2602:59pppaulthe main issue i can see from this design is that you are going to have indexes on each of these value IDs, so you may want to look into the consequences of that. also all of your queries will have a layer of indirection in them, but queries in datomic tend to be pretty small, so i'm not sure it's too important. i think you also lose lookup refs, or your lookup refs will all have backtracking pulls.#2022-07-2603:01pppaulso, you'll have to make a bit of a DSL if you want to make your code look more like regular datomic code (all reverse lookups return lists, usually of a single item). but if you are consistent in your data model then it shouldn't be hard to make some helper functions/macros that you use everywhere. that ends up sorta happening in normal use anyway.#2022-07-2603:02pppaulhttps://docs.datomic.com/on-prem/query/indexes.html#avet
https://docs.datomic.com/on-prem/query/indexes.html#VAET#2022-07-2603:04pppaulyou are making AVETs on many ents, so you need to explore how this affects datomic, as the docs say this is expensive.#2022-07-2603:06pppaulyou may want to look into XTDB as well, i don't know much about it, but it seems fairly active, so people should be able to tell you if you are going to break their DB or not. it has a lot of similarities with datomic, but doesn't seem like you can painlessly swap one for another (different APIs)#2022-07-2603:08pppaulyou may have to keep your own index outside of datomic if your design is going to break datomic (like a KV store, rocksDB or something)#2022-07-2606:25seepelI'm curious what are the advantages of making fields indirect? It sounds like a pain and I'm struggling to come up with a use case that it would serve.#2022-07-2617:15Dave@U03NXD9TGBD, RH discusses this in his https://github.com/matthiasn/talk-transcripts/blob/9f33e07ac392106bccc6206d5d69efe3380c306a/Hickey_Rich/PersistentDataStructure.md talk if you haven't watched it before. He cites indirection in the transcript 4 times. Even though I'm not a programmer, I was able to relate to this as soon as I listened to this talk. Actually found myself smiling and nodding in agreement with every RH talk I've listened to thus far. As a user of manufacturing and supply chain SORs (systems of record), I've endured decades of "identity, state, and values" hell because of the way these systems were designed (underlying data models). There's a huge and unnecessary cognitive load placed on supply chain information workers using these SORs and that's zapping productivity. The importance of "immutability" in SORs cannot be overstated imo. All that said, I can see why any programmer might see indirection as a pain or at least an inconvenience. Simple Made Easy is another great one to watch if you haven't already.
simple_smile#2022-07-2617:22pppaullayers of indirection have a cost, sometimes i find myself removing abstractions from my code because debugging them is hard, or understanding them is hard, or i didn't need them in the first place. sometimes abstractions are needed to solve problems, or let users hook into your system to implement their specialised solutions. i think in your case with datomic, the main downside is the indexes. also it may be the case that you want this property for some of your data, and not all, maybe not even much.#2022-07-2617:24pppaulOne of the bigger problems that i have run into when building software, is that most people making the system (non-devs, but also devs) don't think about change in the system. business demands that things change, but when those things are binding contracts, well that sounds like a bad idea. people have a very big problem when it comes to identifying when something should stop changing. datomic and your idea of how to use it, do not cover that problem. building things that make sense is a very hard job, and it's not really a programming problem.#2022-07-2618:53Dave> One of the bigger problems...
Couldn't agree more @U0LAJQLQ1. Business stakeholders and devs alike, haven't given the necessary hammock time to solving many of these hard problems. There's a myriad of reasons on both sides that could fill a book as to why and how this happens. I'd like to think our small team is different and will indeed solve many of them given the 7 years of hammock time we've put into the effort IP we've developed but we won't know until we commercialize. There are other facets to our application intended to deal with change. Datomic schema is but one of them which I happen to be focused on right now.
> layers of indirection have a cost...
Can you give an example of your having to use an abstraction to solve a problem and an example of what you mean by specialized solution? I know indexes have costs so best to avoid whenever possible. To that end, any elaboration as to why indirection inhibits the use of lookup refs is appreciated. I looked through the documentation and it's not readily apparent to me.#2022-07-2619:03pppaulmultimethod and protocols allow for a type of abstraction that allows users to hook in and create their own solutions to sub or whole problems, while working with the larger system (like plug-ins). embedded languages also allow this, typically referred to as scripting. currently I'm dealing with http://Sentry.io (you may want to use that as well), and building error reports have some well defined structure. I use multimethods to have different types of errors build parts of their own error report. at the same time I am testing the Sentry sdks, and for that I don't want any abstractions, I just want raw data to test with. one of the major costs of abstraction is debugging becomes expensive, other people have trouble maintaining your code. there has to be a big payoff for a big abstraction. SQL is an example of a big abstraction with a big payoff, but good luck asking a random dev to fix anything in postgres core#2022-07-2619:47DaveThanks @U0LAJQLQ1 for being so generous with your time. I tend to think of abstractions with respect to https://en.wikipedia.org/wiki/Type%E2%80%93token_distinction which doesn't help much when your dev team is trying to explain and have you understand their definition of an abstraction. I struggle mightily to understand it from a dev's pov. What's your best, general definition of an abstraction given the way you're using it above?
Also, would still like your take on how indirection inhibits the use of lookup refs assuming that's an accurate interpretation of what you're saying above. Perhaps there's something in the Datomic documentation you've read that I've missed that could provide additional context.#2022-07-2620:31pppaulthe article you link to is talking about a certain type of abstraction. example, programmers don't write code to operate on 8-bit chunks of memory, we write code to operate on something like an integer, or decimal, or text, or list. https://en.wikipedia.org/wiki/Integer_(computer_science) look at how many ways there are to represent an int in programming. it's insane and most programmers just abstract away all of that until they run into a problem (code being slow, code taking up too much memory). we work with something where we don't know how many bits it is, and it could change in size depending on certain things happening in the program. we do that with lists as well, lists are rarely fixed sizes, programmers have no idea what their lists look like in memory, but they know how to add something to a list. in higher level languages behaviour becomes an important abstraction. that's what i was talking about with regards to multimethods, protocols, and scripting (interfaces) https://en.wikipedia.org/wiki/Abstraction_(computer_science) . also mentioned in that article, and the main reason why i found lisp/clojure is language abstraction. sometimes the best way to solve a problem is to create a domain specific language for it, like SQL or Datalog (datomic's language).#2022-07-2620:35pppaulfor the ref lookups, you just aren't going to be looking up the direct entity that you want: [:first-name/name "fred"] isn't going to point to something you care about, you'll have to do something like (pull [{:first-name/_name [:person/first-name]}]) and then you'll get a list of all the people with "fred" as a name, then you have to figure out what one you want.
it becomes very useless except for the exact scenario of getting everyone with the name "fred", which you can do with a 1 line query anyway. so you lose all ability to use lookup refs which are a pretty big deal. you'll have to do queries for every data fetch. in my system queries are a special case, and 95% of my db reads are with pulls via lookup refs#2022-07-2622:20Dave> it's insane and most programmers just abstract away all of that...
Got it. And it is insane! It's the result of 40+ years of accumulating "incidental complexity" . Can definitely relate to it. In our https://edgewoodsoftwarecorp.s3.us-east-2.amazonaws.com/CoalesceIntroduction.mp4 (best to avoid Firefox for best audio if you have 10 mins. to watch it) we highlight systems interoperability as one of the underlying root causes of the entrenched, difficult supply chain problems we're trying to solve. Mapping disparate data models has reached its max. scale imo. Those in the enterprise software space still employing this approach are in an endless cycle of just swapping customers at this point. Which is why we're building a large, horizontal (end-to-end product value chain) application that doesn't rely on any external data sources.
As for lookup refs, I thought that might be what you were getting at. Reverse lookups will be needed, e.g., a user needs to filter a set of higher order entities that include "fred" (common constituent entity). But in the user's day-to-day interaction with the system performing their work, i.e., creating and managing specifications or digital twins of physical referents (should make much more sense to you if you watch the video), their entry point is :person/<> and not :first-name/<>.#2022-07-2521:34pppaulnot everyone has a first name#2022-07-2521:34pppaulyou probably want to use a component, unique is very weird in this case#2022-07-2521:35pppaulyou can always migrate your data#2022-07-2521:36pppaulwhen you use refs, accessing and updating data is more of a pain in the ass#2022-07-2521:38pppaulif you are considering the second use case, you prob are going to do that for almost every field in your database, so you can imagine that you are going to need to do a lot of extra work to manage the refs wherever you use them. you are also diluting the meaning of refs for the rest of your data. a ref should be meaningful. adding another ID to a string for all attributes seems insane#2022-07-2521:44pppaulevery time i had done something like what you did in the second example, i have regretted it and changed it to be a flat structure. when i do have refs for aesthetics, they are usually components, and represent a grouping of related data.#2022-07-2616:56favilaUsing latest on-prem I see a fair number (10 an hour? rate seems irregular) of :valcache/put-exception with the exception java.nio.file.NoSuchFileException thrown out of datomic.valcache$direct_put. Should I be concerned?#2022-07-2616:59favilaWe’re using amazon linux 2 in aws, the disks are the ephemeral nvme ssds, the filesystem is xfs, and lazytime and strictatime are set#2022-08-0114:44Joe Lane@U09R86PA4 Can you send a full stack trace to support? 
My initial reaction is that you shouldn't need to worry but we still want to do a deep dive.#2022-08-0423:15faviladone, #3638
#2022-07-2717:53cjmurphyIs Stack Overflow still a reasonable place to ask Datomic questions that can be easily google searched in the future? Anyway here I've written one: https://stackoverflow.com/questions/73142317/in-datomic-how-to-add-new-entities-as-references-to-the-many-attribute-of-an#2022-07-2718:47ghadisee the /topic to this channel#2022-07-2718:48ghadi(http://ask.datomic.com is the way)#2022-07-2719:03cjmurphyOkay thanks I'll delete the public one when it just looks like hanging around as unanswered.#2022-08-0121:15Nedeljko RadovanovicHi guys,
I need to retrieve the ids of all users in the database that are older than 30 days; every user has a creation date.
(let [conn (client/get-conn)
      db (client/db)
      last-month (t/minus (t/now) (t/month 1))
      q '{:find [(pull ?eid [*])]
          :where [[?eid :user/created]]}]
  ;;.............
  )
I have the last-month #inst date, I just don't know how to get all users "older" than 30 days.
I am new to Datomic, so
if someone can help, thank you in advance...#2022-08-0210:17souenzzo(let [conn (client/get-conn)
      db (client/db)
      ;; using java.time
      #_#_last-month (Date/from (.minus (Instant/now)
                                        (Duration/ofDays 30)))
      last-month (t/minus (t/now) (t/month 1))]
  ;; get the created from the instant where user/id was created
  #_(d/q '{:find [(pull ?eid [*])]
           :in [$ ?last-month]
           :where [[?eid :user/id _ ?tx]
                   [?tx :db/txInstant ?created]
                   [(< ?created ?last-month)]]}
         db last-month)
  (d/q '{:find [(pull ?eid [*])]
         :in [$ ?last-month]
         :where [[?eid :user/created ?created]
                 [(< ?created ?last-month)]]}
       db last-month))
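The commented-out java.time alternative above can stand on its own as a dependency-free cutoff computation; d/q's < predicate compares instants as java.util.Date values, so no clj-time is needed.

```clojure
;; Build the ?last-month query input with plain Java interop.
(import '(java.util Date)
        '(java.time Instant Duration))

(def last-month
  ;; a java.util.Date 30 days before now
  (Date/from (.minus (Instant/now) (Duration/ofDays 30))))

;; passed as the second query input: (d/q query db last-month)
```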
#2022-08-0210:38Nedeljko RadovanovicThank you#2022-08-0210:42souenzzoIDK what t/minus returns, but d/q requires something that look like a java.util.Date#2022-08-0210:42Nedeljko Radovanovicit returns a instance like this #<DateTime 1986-10-14T04:00:00.000Z>#2022-08-0210:43Nedeljko Radovanovici am saving creation date as Date instance so this solution works best right now, if i didnt have this solution i would need to change how create time is written in database, and to replace (new Date) with (t/now)#2022-08-0210:44Nedeljko Radovanoviclast-month (t/minus (t/now) (t/month 1))
this is from clj-time library#2022-08-0200:12pppaulyou need to use :in in your query and add a where for comparing the date. there are examples in the docs#2022-08-0201:48pppaulhttps://docs.datomic.com/on-prem/query/query.html#multiple-inputs#2022-08-0201:49pppaulhttps://docs.datomic.com/on-prem/query/query.html#predicate-expressions#2022-08-0201:50pppaulthe docs are very useful for learning datomic, you really need to read most of them, multiple times. there are also example repos like day-of-datomic, but i found reading the docs to be all i need until i get into dealing with querying the log-db or other fairly advanced features.#2022-08-0211:01jumarAre there any good db clients that support Datomic?
I'm aware of Datomic Console but maybe something more advanced and easier to use.#2022-08-0211:11souenzzohttps://docs.datomic.com/cloud/other-tools/REBL.html#2022-08-0211:28Kris Cthis is only for datomic-cloud, right?#2022-08-0211:30souenzzodatomic on-prem still do not support this?!#2022-08-0211:33Kris CAh, it's just an "interactive tool for browsing clojure data"... Not really Datomic-specific.
https://github.com/cognitect-labs/REBL-distro#2022-08-0211:34Kris CSo, @U06BE1L6T, the answer to your question is probably not as of now...#2022-08-0211:35souenzzoyeah but the datomic api's need to returns datafy/nav for greater dev experience, like resolve references etc#2022-08-0212:04Joe LaneREBL is not cloud only#2022-08-0213:47Joe Laneas-in, REBL works great with Datomic, which already does return datafy/nav'ed data.
#2022-08-0216:20jumarThanks for the answers. Sounds like there's space for new tools here :)#2022-08-0212:42Andrey PopeloIs it possible to run peer-server without SSL?#2022-08-0307:39jumarPerhaps you need to set :validate-hostnames false ?
See https://forum.datomic.com/t/ssl-handshake-error-when-connecting-to-peer-server-locally/1067
(I only tried to run the peer server locally but it worked)#2022-08-0322:05Mateusz MazurczakI want to filter the result by whether the key does not exist or its value (a date) is earlier than the provided arg.
So in pseudo code it would look like this:
[:find ?fizz
 :in $ ?my-date
 :where
 [?fizz :fizz/buzz ?buzz]
 (or (missing? $ ?fizz :fizz/date)
     [(< ?my-date [?fizz :fizz/date])])]
Is that possible in the query?#2022-08-0322:39pppaulrules act like logical ors#2022-08-0322:40pppaulor statement and join-or are a bit funky in how they work, and i haven't been very successful in using them#2022-08-0322:57pppaulhmm, i can't seem to use missing? with fake data#2022-08-0323:00pppauli was trying to do some tests without making a DB, but i get errors with missing?#2022-08-0323:00pppaulhttps://docs.datomic.com/on-prem/query/query.html#rules#2022-08-0323:00pppaulyou can combine rules to form ORs i think that is a good place to start.#2022-08-0404:11Byron ClarkI think you want something like this:
[:find ?fizz
 :in $ ?my-date
 :where
 [?fizz :fizz/buzz ?buzz]
 (or-join [?fizz]
   [(missing? $ ?fizz :fizz/date)]
   [(< ?my-date [?fizz :fizz/date])])]#2022-08-0409:57Mateusz Mazurczak@U0394DH0S3W Yeah that's what I meant, but the case is that this code does not work in Datomic, as this line gives me the message:
[(< ?my-date [?fizz :fizz/date])]
Unable to resolve symbol: ?fizz in this context#2022-08-0409:59Mateusz MazurczakWhile if I remove this one line it all works#2022-08-0414:30pppaulhttps://docs.datomic.com/on-prem/query/query.html#predicate-expressions there is an example of using preds in a query#2022-08-0414:31pppauli'm not sure [?fizz :fizz/date] is valid. does anyone have examples of this syntax being used in queries?#2022-08-0414:47Byron Clark[?fizz :fizz/date] is valid in a where, but I don’t think it does what you want. Sorry I didn’t pay too much attention to that when adding the or-join.
[?fizz :fizz/date] unifies ?fizz to entities that have a :fizz/date attribute but it doesn’t extract the value of the attribute.
I think this is what you want:
[:find ?fizz
 :in $ ?my-date
 :where
 [?fizz :fizz/buzz ?buzz]
 (or-join [?fizz]
   [(missing? $ ?fizz :fizz/date)]
   (and [?fizz :fizz/date ?fizz-date]
        [(< ?my-date ?fizz-date)]))]
#2022-08-0414:22FiVoHey, I am getting the following error with the client-pro library.
clojure.lang.ExceptionInfo: with db is no longer available
at datomic.client.api.async$ares.invokeStatic(async.clj:58)
at datomic.client.api.async$ares.invoke(async.clj:54)
at datomic.client.api.sync$eval2405$fn__2418.invoke(sync.clj:120)
at datomic.client.api.protocols$fn__11953$G__11871__11960.invoke(protocols.clj:126)
at datomic.client.api$with.invokeStatic(api.clj:363)
at datomic.client.api$with.invoke(api.clj:353)
the docs don't mention anything about it being deprecated. https://docs.datomic.com/client-api/datomic.client.api.html#var-with#2022-08-0414:26favilaI think that literally means “the with-db object isn’t on the server anymore”#2022-08-0414:27Joe Lane@UL638RXE2 Francis is right ^^#2022-08-0414:29FiVoOk, that makes more sense as well.#2022-08-0414:33FiVoSo how do I avoid this behaviour ?#2022-08-0414:36FiVoIt's actually not quite clear to me how this can happen.#2022-08-0507:22mx2000Hey
I am trying to get an older (a few years old) Datomic project to run … (using [com.datomic/datomic-pro “0.9.5407”])
But I cannot connect to my transactor! I set up the properties file correctly and started the transactor:
➜ datomic-pro-0.9.5407 bin/transactor config/dev-transactor-template.properties
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:<DB-NAME>, storing data in: data ...
System started datomic:<DB-NAME>, storing data in: data
And my app:
ERROR: AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source)
Execution error (ActiveMQNotConnectedException) at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl/createSessionFactory (ServerLocatorImpl.java:799).
AMQ119007: Cannot connect to server(s). Tried with all available servers.
#2022-08-0507:26mx2000There are also some warnings, maybe the datomic version too old?
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0 (file:/Users/msappler/projects/datomic-pro-0.9.5407/lib/netty-all-4.0.39.Final.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
#2022-08-0507:34mx2000Solved by using /cdn-cgi/l/email-protection instead of /cdn-cgi/l/email-protection#2022-08-0513:03jaretAlways happy to see old things work, but is there any particular reason you're looking at this old version? We've released a lot of fixes/improvements in the 6 years since 0.9.5407 was released and I'd hate for you to trip up on those issues. Happy to chat over support if you're interested in talking about using the latest @ <mailto:/cdn-cgi/l/email-protection|/cdn-cgi/l/email-protection>#2022-08-0513:05mx2000I cannot use latest version, because i got the license (free one) in 2017
#2022-08-0514:00mx2000Is there any way to use newer version?#2022-08-0519:09jaret@U0ALH6R89 yes, you can purchase a license https://www.datomic.com/buy.html. I'd also be happy to setup a call and discuss further. We're always keen to hear what customers are using Datomic for and how our license model might fit or not fit your needs.#2022-08-0808:05Nedeljko RadovanovicHi people,
is there a way to retract users from the database by doing only one transact?
[#:db{:id 17592186045418} #:db{:id 17592186045421} #:db{:id 17592186045423}]
I have their db ids but do i need to loop over this vector or can I do it in one transact?#2022-08-0808:09cl_jyou can put them in a vector and send them in one transaction:
(d/transact conn {:tx-data [[:db/retractEntity 1]
                            [:db/retractEntity 2]
                            [:db/retractEntity 3]]})#2022-08-0808:13Nedeljko Radovanovichmm thank you
(d/transact
 conn
 {:tx-data [[:db/retract [:db/id 1]]
            [:db/add "datomic.tx" :db/doc "remove old user"]]})
I tried this approach and it didn't work for me, didn't know about retractEntity, thank you, I will try it#2022-08-0811:20souenzzo• :db/retract retracts specific attributes, like [:db/retract 42 :user/name]
• :db/retractEntity is equivalent to getting all the keys (keys (d/pull db '[*] 42)) => [:user/name :user/id ....] and retracting them.
• I recommend you to write tx-my-operation, like (defn tx-retract-user [.... ]), that returns just the array of the operation, [[:db/retract [:db/id 1]] [:db/add "datomic.tx" :db/doc "remove old user"]] in your example
• tx-data is always composable via (concat tx-retract-user-1 tx-retract-user-2) => tx-retract-user-1-and-2. Then you can use (d/transact conn {:tx-data tx-retract-user-1-and-2}) #2022-08-0811:22souenzzo
Avoid creating dead-simple functions
(defn tx-retract-user [id]
  [[:db/retractEntity id]
   [:db/add "datomic.tx" :db/doc "remove old user"]])
Just write these tx-my-operation if there is some complexity on it.#2022-08-0817:15Nedeljko RadovanovicThank you for your response, I will try it. ☺️#2022-08-0819:32pppaul@U2J4FRT2T i don't follow, you say to avoid writing simple functions, but your examples are simple functions.#2022-08-0819:34pppauldo you mean to not write functions, but just write out a bunch of txs in a let, then concat them?#2022-08-0819:34Nedeljko RadovanovicI think he wanted to say is that if i have complexity in code to make a separate functions to make it more simple, to split code for more simple look, please tell me if I am wrong#2022-08-0819:35Nedeljko RadovanovicTo make those “operations” functions for operations that i require, and not to write it all together in one#2022-08-0819:36Nedeljko Radovanovic@U2J4FRT2T please tell if i got it wrong#2022-08-0819:38pppaulok, i think if you are reusing the logic in a lot of places that makes sense. i find a lot of my txs are not reused, though.#2022-08-0819:57souenzzomy point is just don't write a function called tx-retract-x that do just [[:db/retract x]]
if you want to retract x, write [[:db/retract x]]
Write functions like tx-user-exit, that will: retract the user entity, mark its address as disabled, send one bye message for each active friend...
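A sketch of the composition advice above (function name and eids are illustrative): tx-* helpers return plain tx-data vectors, so several of them combine with mapcat/concat into one transaction.

```clojure
;; A tx-* helper returns data, not a transaction result.
(defn tx-retract-user
  "Tx-data removing one user and annotating the transaction."
  [user-eid]
  [[:db/retractEntity user-eid]
   [:db/add "datomic.tx" :db/doc "remove old user"]])

;; Compose two retractions into a single tx-data vector.
(def tx-remove-both
  (vec (mapcat tx-retract-user [17592186045418 17592186045421])))

;; (d/transact conn {:tx-data tx-remove-both})
```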
#2022-08-0917:57pppauli'm trying to use [:db/cas on a attr that is a db/ident val but i can't use the ident val as a placeholder for the db/id, and cas fails saying it compared my ident keyword with the db/id and they weren't the same. anyone else run into this issue? do i have to do id lookups when using cas like this?#2022-08-0917:58pppaul(db/transact db/conn [[:db/cas
  17592186048479
  :registration.stub/state
  :registration.state/pending-principal-acceptance
  :registration.state/membership-accepted
]])#2022-08-0917:58pppaul{:a :registration.stub/state,
 :e 17592186048479,
 :v 17592186046002,
 :v-old :registration.state/pending-principal-acceptance,
 :cognitect.anomalies/category :cognitect.anomalies/conflict,
 :cognitect.anomalies/message "Compare failed: :registration.state/pending-principal-acceptance 17592186046002",
:db/error :db.error/cas-failed}#2022-08-0918:00ghadigotta (d/entid ) them#2022-08-0918:01ghadi@pppaul ^#2022-08-0918:03pppaulthat's what i ended up doing to get cas working, though through pull. i'll use entid instead. i thought i may just be doing something wrong in that case.#2022-08-0918:05pppaulthanks#2022-08-0918:54Ivar RefsdalAny reason why Datomic does not try to (d/entid) them itself?#2022-08-0918:57Ivar RefsdalI wrote some code in https://github.com/ivarref/double-trouble that handles this:
https://github.com/ivarref/double-trouble/blob/main/src/com/github/ivarref/double_trouble.clj#L73
(I didn't document it though.)#2022-08-1008:59souenzzomany years ago, when datomic had a "suggest a feature" portal, I suggested the feature "support eid/refs in db/cas" and it had some "upvotes".
#2022-08-0919:23ghadi@ivar.refsdal @pppaul my guess is either performance, or that the ident is a reference (not a value), and refs can change#2022-08-0919:35pppauli feel like this is just an oversight in the cas function#2022-08-0921:40joshkhsmall typo on the dev-local maven configuration page 😇
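ghadi's d/entid fix can be sketched as a small helper: :db/cas compares raw values, so a ref attribute's old/new idents must be resolved to entity ids first. The resolver is injected here so the tx-data construction stays a pure function; on a real peer it would be (partial d/entid db).

```clojure
;; Build a :db/cas op with idents resolved to entity ids up front.
;; entid-fn is (fn [ident] eid) -- a stand-in for d/entid on the peer.
(defn cas-ident-tx
  [entid-fn e attr old-ident new-ident]
  [[:db/cas e attr (entid-fn old-ident) (entid-fn new-ident)]])

;; On the peer (sketch):
;; (d/transact conn (cas-ident-tx (partial d/entid (d/db conn))
;;                                17592186048479
;;                                :registration.stub/state
;;                                :registration.state/pending-principal-acceptance
;;                                :registration.state/membership-accepted))
```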
#2022-08-1022:33AthanMeasuring Transactions Throughput with a Datomic in-memory database
I am experimenting with the https://github.com/cognitect-labs/day-of-datomic-cloud/tree/master/datasets/goodbooks-10k in day-of-datomic-cloud repository and I am trying to measure the memory allocation and the time to issue transactions, i.e. find the maximal throughput measured in datoms/sec
These are my results running the process on a local machine:
• For the 3.2MB books.csv file
time: 2.9 sec for 23 cols x 10000 rows
Memory Allocated: 239MB (mem difference from Linux system monitor for java process)
datoms inserted: 229,980
transactions throughput: 229,980/2.9 = 79,303 datoms/sec
But in this measurement time includes reading rows from the CSV file
(time (d/transact conn (with-open [f (io/reader (io/file repo-dir "books.csv"))]
                         (let [rows (drop 1 (csv/read-csv f))]
                           (mapv #(row->entity % book-schema) rows)))))
• For the 69MB ratings.csv file
;; 5,976,480 rows x 3 columns
;; 69MB ratings.csv file
;;
;; Transform csv rows into 5.9 million rating entities in memory
;; Elapsed time: 11.6 secs
;; Memory Allocated: 3GB (mem difference from Linux system monitor for java process)
;; Processes the data asynchronously: opens the file and reads all rows into memory
(future (time (def rating-entities
                (with-open [f (io/reader (io/file repo-dir "ratings.csv"))]
                  (let [rows (drop 1 (csv/read-csv f))]
                    (mapv row->rating rows))))))
OK, now that we have the transaction data in memory, we can measure the throughput:
;; Elapsed time: 5m 47s (347 sec)
;; Memory Allocated: 4.1 GB (mem difference from Linux system monitor for java process)
;; datoms inserted: 23,905,976
;; transaction throughput: 23,905,976 / 347 sec = 68,893 datoms/sec
(time (doseq [chunk (partition-all 100000 rating-entities)]
        ;; if you want to be nice to other users on a shared system
        ;; (Thread/sleep 2000)
        (d/transact conn chunk)))
For this setup, writing datoms only to memory, I reached a limit of about 70,000 datoms/sec,
and it took more than 5 minutes for a 69MB file with a 4GB memory footprint! I wouldn't dare run this against a transactor backed by Postgres; I expect it would take a really long time and produce a big database. And that seems to be a problem when one has to port tables from a relational database with millions of rows and dozens of columns.
Generally speaking, it's already tough to port a medium-to-large production SQL (relational) database into Datomic, taking into consideration authorization/authentication, triggers, functions, constraints and data modeling, and it becomes even harder when you also have to think a lot about resources and available time.
Anyway, I am curious how to run pipelined transactions (https://docs.datomic.com/cloud/best.html#pipeline-transactions) and see what difference it makes. Can you share some code on how to use it for this example dataset?
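A sketch of the pipelining idea from that page: keep a bounded number of transactions in flight instead of derefing each one before submitting the next. The helper below is illustrative, not the docs' exact code; `submit!` stands in for `#(d/transact-async conn %)` from the peer API.

```clojure
;; Sketch: call submit! on each batch (it must return something
;; deref-able, e.g. the future from datomic.api/transact-async),
;; keeping at most `in-flight` submissions pending before blocking
;; on the oldest one.
(defn pipeline! [submit! in-flight batches]
  (loop [pending clojure.lang.PersistentQueue/EMPTY
         batches (seq batches)]
    (if batches
      (let [pending (conj pending (submit! (first batches)))]
        (if (>= (count pending) in-flight)
          (do @(peek pending)                     ; block on the oldest
              (recur (pop pending) (next batches)))
          (recur pending (next batches))))
      (run! deref pending))))                     ; drain what's still in flight

;; With Datomic this would look something like:
;; (pipeline! #(d/transact-async conn %) 20
;;            (partition-all 1000 rating-entities))
```

The window size trades peer memory against transactor utilization; the docs suggest keeping on the order of tens of transactions in flight.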
PS: The machine is a workstation with Intel Xeon E5-1650 v4 @ 3.60GHz × 6 cores
The peer's REPL is started from inside IntelliJ IDEA; when it finished processing:
Total Memory Allocated: 7.3 GB (Linux System monitor for java process)
jvm-opts ["-Dvlaaad.reveal.prefs={:theme,:light} -Xm2g"]#2022-08-1104:42JohnJHow much memory was given to the peer?#2022-08-1109:10AthanThe peer is using datomic.api and the database is created in memory
[datomic.api :as d]
(def db-uri "datomic:")
(d/create-database db-uri)
(d/connect db-uri))
The JVM process is spawned from a REPL that is started from inside IntelliJ IDEA
When it finished processing Total Memory Allocated: 7.3 GB (Linux System monitor for java process)
Are you referring to some other memory setting for the peer - which one exactly? How can I view/set it?#2022-08-1114:00lassemaattaI might be wrong, but I recall reading somewhere that the in-memory database does not have the same performance characteristics as an actual transactor + storage db. So you might want to set up a proper storage db if you want to do benchmarks. (EDIT: here's something: https://docs.datomic.com/on-prem/getting-started/dev-setup.html)
#2022-08-1114:05lassemaattaalso, did you specify the maximum heap size? If not, I think the jvm sets a default limit of 1/4 of system ram. I'm not sure how much useful information you can gather by just looking at the java memory usage from the system's point of view, because that doesn't really tell you how the heap is utilized (eg. is the jvm constantly running out of memory and thus doing gc all the time, or not at all). There are probably other people here who can give better tips regarding this.#2022-08-1114:09jumarDefinitely agree with the above.
You need better metrics.
Also check there aren't other competing processes.
If you have nothing else, jcmd is a great tool#2022-08-1115:31dazldThe in memory DB definitely gets slower the more data you put into it - I don’t know why.#2022-08-1115:31dazldIt doesn’t seem to matter what kind of data too, and queries that shouldn’t be affected start to slow down by adding unrelated datoms#2022-08-1115:32dazldtldr - you can’t compare the inmem db against a real setup, sadly{:tag :div, :attrs {:class "message-reaction", :title "white_check_mark"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("✅")} " 1")}
#2022-08-1118:21JohnJ@U03ET6PDHCK the heap size, you can set it by passing -Xmx4096m for example#2022-08-1120:05AthanThanks all for your answers. It seems I have to run a transactor to draw safe conclusions about transaction-write performance, and I guess the best viable solution is DynamoDB. But the Datomic setup for this storage engine is another fruitcake 🙂. If I am successful in configuring and running it I will post results.
It's also important to explain quickly why I think the Datomic in-memory client performs so badly: it's the data structures. To maximize in-memory write speed one has to use different memory management and data structures, similar to those used in pyarrow and numpy.
So, how about building an in-memory Datalog query engine on top of pyarrow?#2022-08-1200:36AthanThe DynamoDB Local storage engine works fine on Linux, but the configuration for Datomic proves to be a pain in.... Although I managed to make it start without a problem, it fails after some time
Launching with Java options -server -Xms4g -Xmx4g -Ddatomic.printConnectionInfo=true
Starting datomic:<DB-NAME> ...
System started datomic:<DB-NAME>
and this is what I got from trying to connect from datomic Java shell
Datomic Java Shell
Type (); for help.
datomic % uri = "datomic:";
<datomic:>
datomic % Peer.createDatabase(uri);
// Error: // Uncaught Exception: bsh.TargetError: Method Invocation Peer.createDatabase : at Line: 2 : in file: <unknown file> : Peer .createDatabase ( uri )
Target exception: clojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 8031 {:alt-host nil, :peer-version 2, :password "<redacted>", :username "1cW+SaWdseBnbsJieDkd0NCY0MdBVfrEipe+0GsXH4Y=", :port 8031, :host "localhost", :version "1.0.6397", :timestamp 1660264442167, :encrypt-channel false}
clojure.lang.ExceptionInfo: Error communicating with HOST localhost on PORT 8031 {:alt-host nil, :peer-version 2, :password "<redacted>", :username "1cW+SaWdseBnbsJieDkd0NCY0MdBVfrEipe+0GsXH4Y=", :port 8031, :host "localhost", :version "1.0.6397", :timestamp 1660264442167, :encrypt-channel false}
#2022-08-1201:00AthanIt was far easier to deploy, configure and test the transaction throughput of a similar key-value storage engine (LMDB) in the https://github.com/juji-io/datalevin Datalog DBMS
;; 5.2 sec for 23 cols x 10000 rows
;; 3.2MB books.csv file
;; Elapsed time: 5.2 secs
;; datoms inserted: 229,956
;; transaction throughput: 229,956 / 5.2 sec = 44,222 datoms/sec
So I would expect the Datomic transactor on AWS DynamoDB Local to have similar performance. Which means that one has to plan capacity (https://docs.datomic.com/on-prem/operation/capacity.html#dynamodb) and configure peers/pipelining etc. accordingly... And sooner or later you realize that this is the price you pay for having the components of a DBMS separated.
I have met similar problems in the past when I was testing the write performance of the redis KV storage engine. All these KV engines (redis, dynamodb, lmdb) are very good at point queries but they perform really badly when you want to write (import) a big volume of data. You may argue that write performance is not critical for a transactional (OLTP) DBMS, but it becomes super important when you want to import your data from another system/project, or you want to integrate a big volume of data from other sources, or you want to do analytics without adding another storage engine.
In fact, what we are discussing here is the price you pay for having a flexible, universal data model based on EAV/RDF triples. It is a similar case when you try to construct a relational, tuple-based data model on top of a KV storage engine or object-like memory structures (Python/Clojure). The physical layout must be appropriate for such a data model, and the best candidate I found from my personal research and experiments is a columnar layout.
Why not add support for truly columnar database engines, such as ClickHouse or SingleStore (MemSQL), to serve as Datomic storage engines?
#2022-08-1104:52jumarI'm following the Max Datom tutorial - level 10: https://max-datom.com/#/DB612D03-9AF7-49B7-98B5-4C77ADE029D2
I'm having trouble making a proper query.
They basically want to count the posts of a given user.
I tried the following query but it only returns 1 as a count - it should return 3 because this user has 3 posts associated with them.
(d/q '[:find ?user-name (count ?post-author)
       :where
       [?user :user/id #uuid "1B341635-BE22-4ACC-AE5B-D81D8B1B7678"]
       [?user :user/first+last-name ?user-name]
       [?post :post/author ?user]
       [?post :post/author ?post-author]]
     db)
;; => [[["E. L." "Mainframe"] 1]]#2022-08-1104:56jumarI'm wondering whether I should simply count :post/id instead of :post/author - but they say :post/author in the tutorial, which is confusing...
(d/q '[:find ?user-name (count ?post-id)
       :where
       [?user :user/id #uuid "1B341635-BE22-4ACC-AE5B-D81D8B1B7678"]
       [?user :user/first+last-name ?user-name]
       [?post :post/author ?user]
       [?post :post/id ?post-id]]
     (db))
;; => [[["E. L." "Mainframe"] 3]]#2022-08-1105:02jumarHere's my code: https://github.com/jumarko/datomic-starter-sample/pull/8/files#diff-73408bf9944a77167caf09f14d0b5955af225631aed9696e3fad48d6cd0972e0R120-R155#2022-08-1105:07shane[?post :post/author ?user] is already making the connection between user and posts - you don't need to go into the post entity like that to pull out a field value.#2022-08-1105:17jumar@U0V0HQWAE I'm not sure I understand. How do I count the posts then?#2022-08-1105:18jumarAh, it's just (count ?post) 🙂#2022-08-1113:04defaWe just ran into a production issue with pedestal/lacinia/datomic service. it worked fine until a few days ago and now we see a lot of the following errors:
ERROR: Transactor not available {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message Transactor not available}
#error {
:cause Transactor not available
:data {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message Transactor not available}
:via
[{:type clojure.lang.ExceptionInfo
:message Transactor not available
:data {:cognitect.anomalies/category :cognitect.anomalies/unavailable, :cognitect.anomalies/message Transactor not available}
:at [datomic.peer$transactor_unavailable invokeStatic peer.clj 167]}]
:trace
[[datomic.peer$transactor_unavailable invokeStatic peer.clj 167]
[datomic.peer$transactor_unavailable invoke peer.clj 164]
[datomic.peer.Connection transactAsync peer.clj 328]
[datomic.peer.Connection transact peer.clj 311]
[datomic.api$transact invokeStatic api.clj 107]
[datomic.api$transact invoke api.clj 105]
...
[java.lang.Thread run Thread.java 829]]}
2022-08-10 13:49:37,916 [ERROR] org.apache.activemq.artemis.core.client - AMQ214016: Failed to create netty connection
java.nio.channels.ClosedChannelException: null
at io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1063)
...
at java.base/java.lang.Thread.run(Thread.java:829)
2022-08-10 13:49:38,310 [INFO ] datomic.common - {:event :common/retry, :backoff 1000, :attempts 4, :max-retries 9223372036854775807, :pid 2528, :tid 143}
clojure.lang.ExceptionInfo: Error communicating with HOST on PORT 4334
at datomic.connector$endpoint_error.invokeStatic(connector.clj:53)
...
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:787)
at datomic.artemis_client$create_session_factory.invokeStatic(artemis_client.clj:114)
at datomic.artemis_client$create_session_factory.invoke(artemis_client.clj:104)
at datomic.connector$try_hornet_connect.invokeStatic(connector.clj:96)
at datomic.connector$try_hornet_connect.invoke(connector.clj:81)
at datomic.connector$create_hornet_factory.invokeStatic(connector.clj:128)
... 21 common frames omitted
2022-08-10 13:49:38,311 [WARN ] datomic.common - ... caused by ...
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119007: Cannot connect to server(s). Tried with all available servers.
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:787)
at datomic.artemis_client$create_session_factory.invokeStatic(artemis_client.clj:114)
...
at java.base/java.lang.Thread.run(Thread.java:829)
...
2022-08-10 13:50:33,608 [INFO ] datomic.common - {:event :common/retry, :backoff 1000, :attempts 1, :max-retries 9223372036854775807, :pid 2528, :tid 143}
clojure.lang.ExceptionInfo: Error communicating with HOST transactor.host.name.removed on PORT 4334
at datomic.connector$endpoint_error.invokeStatic(connector.clj:53)
...
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119010: Connection is destroyed
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:335)
...
at clojure.lang.RestFn.invoke(RestFn.java:464)
at datomic.connector.TransactorHornetConnector$fn__10264.invoke(connector.clj:214)
... 17 common frames omitted
Any ideas what’s going on here?#2022-08-1113:06defaThis is an Datomic on-prem and management has decided to not renew the license because everything is running fine :face_with_rolling_eyes:#2022-08-1113:07defaI’m going to convince them to get proper support and bugfixes by renewing the license but meanwhile some help would be greatly appreciated.#2022-08-1113:08defaThe the pedestal-based server, datomic-transactor and the postgres-db are all running on separate virtual machines.#2022-08-1113:37manutter51I’m not a very good resource for datomic debugging, but we’ve had some Transactor Not Available issues earlier in the summer and I can pass on what I’ve overheard. Check for out-of-memory issues and out-of-disk-space issues. Also check the network connections between the transactor and the backend store — we finally tracked our issues down to a bad network switch.{:tag :div, :attrs {:class "message-reaction", :title "heavy_plus_sign"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("➕")} " 1")}
#2022-08-1114:13defaThanks, I already checked that. At least in my Grafana dashboard (node exporter) I can see no anomalies. But a broken switch might not be detected that way. Thanks.
#2022-08-1214:30pppaulhow do i find the schema for a transaction? (the entity that has :db/txInstant and some other keys...) i just don't know what keys it has and i can't find documentation on it. i can't even find examples of how to add a doc string to a transaction#2022-08-1214:43pppauli found these
{:db/cardinality {:db/id 35},
:db/doc "Attribute whose value is a :db.type/instant. A :db/txInstant is recorded automatically with every transaction.",
:db/id 50,
:db/ident :db/txInstant,
:db/index true,
:db/valueType {:db/id 25}}
{:db/doc "Partition used to store data about transactions. Transaction data always includes a :db/txInstant which is the transaction's timestamp, and can be extended to store other information at transaction granularity.",
:db/id 3,
:db/ident :db.part/tx}#2022-08-1214:43pppaulcan't find anything else on tx schema#2022-08-1214:47pppaulis it the case that a transaction can just have any data on it so long as there is a schema for it in in DB?#2022-08-1214:58favilaA transaction is an entity. Just like any entity, it can be in the E or V slot of any datom#2022-08-1214:58favilaTo add an assertion to the current transaction, use the tempid "datomic.tx"#2022-08-1214:59pppaulok, i thought that transactions may be a bit more special than that. thanks#2022-08-1215:01favilaThey are special only in that their ids must be in the tx partition, only they may have :db/txInstant assertions (this assertion is what makes it a transaction), and :db/txInstant cannot decrease in value relative to the previous transaction.#2022-08-1215:05pppaulthat is something i ran into when seeding my DB#2022-08-1217:39lassemaattaDatomic docs often say something like ”the variable is bound”, eg. wrt rule bindings. What exactly does bound mean in this context? #2022-08-1217:42favilaIt means that values are known for it.#2022-08-1217:42lassemaattaI’ve been reading about rules and required rule bindings and I’m not quite sure I understand the description#2022-08-1217:43favilaEvery variable mentioned in a clause is bound (i.e. some set of values was assigned to it) at the end of the clause’s evaluation#2022-08-1217:44favilabut not all may be bound on entering a clause#2022-08-1217:45favilabecause some clauses produce the bindings (e.g. by pattern-matching, by unifying other already-bound values, by executing a function and binding the result, etc)#2022-08-1217:45lassemaattaSo if I specify that a particular rule argument is required, I should have a where clause (either before or after) which refers to it?#2022-08-1217:46favilaby “required” I assume you mean “must be bound”, i.e. 
this rule declaration syntax (rule-name [?must ?bind ?these] ?maybe-unbound)?#2022-08-1217:47favilathis says that ?must ?bind and ?these must have values before entering the rule.#2022-08-1217:47lassemaattaYeah#2022-08-1217:47favilathe rule itself is not allowed to bind them#2022-08-1217:47favilait can only read/unify on their values#2022-08-1217:47favila?maybe-unbound may or may not be bound, doesn’t matter#2022-08-1217:48favilabut it will be bound by the time the rule finishes evaluating#2022-08-1217:49lassemaattaRelated question: what exactly does unify mean? Again, it’s one of those words used often, yet I dont quite grasp it#2022-08-1217:53favila:in ?a
; bound by args: ?a
:where
; bound: ?a
[?a ?b ?c]
; bound: ?a ?b ?c
[?c ?d]
; bound: ?a ?b ?c ?d
(myrule ?x ?y)
; bound: ?a ?b ?c ?d ?x ?y
[(myfn ?x) ?g]
; bound: ?a ?b ?c ?d ?x ?y ?g
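The table-of-bindings model above can be made concrete with clojure.set/join (toy data, plain Clojure, not Datomic internals): unifying on a shared variable is essentially a relational join that drops the rows that disagree.

```clojure
(require '[clojure.set :as set])

;; Rows a clause like [?a ?b] might have produced:
(def r1 #{{:?a 1 :?b 10} {:?a 2 :?b 20}})

;; Rows a later clause [?b ?c] might produce:
(def r2 #{{:?b 10 :?c "x"} {:?b 99 :?c "y"}})

;; Unifying on the shared ?b keeps only rows that agree on it:
(set/join r1 r2)
;; => #{{:?a 1, :?b 10, :?c "x"}}
```

The row `{:?a 2 :?b 20}` is discarded because no row in the second relation has `:?b 20`; that discarding is the "unify" step.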
#2022-08-1217:54lassemaattaWhat happens if you first refer to a rule, which has a required variable, and then provide a normal where clause which binds the variable? Does it first ”execute” the latter clause and then ”execute” the rule?#2022-08-1217:54faviladatomic will only sometimes push clauses down not up#2022-08-1217:55favilaso you will just get an error#2022-08-1217:55favilaI wouldn’t rely on clause reordering though. the point of that syntax is to make sure that the rule doesn’t evaluate in an inefficient order#2022-08-1217:56favilaclauses are generally not reordered, so selectivity of the first clause is very important#2022-08-1217:57favilainside a rule, it can be less predictable than in a :where how the rule was “invoked” in various queries, so someone may have left the vars for the first clauses unbound, causing a potentially large result set.#2022-08-1217:57favilathat syntax is a defense against that#2022-08-1217:58favilaa smarter system would reorder the clause evaluation in the rule based on selectivity or cardinality information, but datomic does not do that#2022-08-1217:59favilare: unification, imagine that you have a table, and each variable in the query is a column#2022-08-1218:00favilawhen a variable is bound, it adds rows to the query with values for that column#2022-08-1218:00favilathe unbound columns remain “blank”#2022-08-1218:01favilawhen you “unify” vars, you discard entire rows where a clause is not satisfied for the corresponding vars mentioned#2022-08-1218:02lassemaattaah, right#2022-08-1218:02lassemaattaoh and thanks a lot for explaining this stuff, I should really read a prolog book or something 🙂#2022-08-1218:02favilae.g. 
[?a 1] if ?a is unbound gives you a table of all (unique) ?a where the attribute slot is 1.#2022-08-1218:03favila[(+ ?a 1) ?b] if ?b is unbound just adds a column ?b with +1 every ?a value for every row#2022-08-1218:03favilabut if it is bound, it discards any rows where ?b is not one more than ?a#2022-08-1218:06lassemaattathe reason I'm reading about this (and asking silly questions) is that earlier today I was looking at some code in review where it had a where clause with a) first a few rules which required some bound variables and b) later a very lax normal clauses which bound those variables. I was a bit confused because I had understood that the variables should be bound before the rule is invoked but apparently the code worked. but later we noticed that with a larger db size it was also quite slow. does this make any sense?#2022-08-1218:08lassemaatta(I should really test this in a repl instead of just pondering about it..)#2022-08-1218:08favilayep. It sounds like it pushed down the rule, but only as far as needed to get something bound, and you just got lucky that it was enough to satisfy the requirements?#2022-08-1218:09lassemaattayeah, that would explain the poor performance. It couldn't satisfy the requirements of the rules, until it found the clauses it could bind (returning a huge set of data) which it finally ran through the rules#2022-08-1218:10favilaare you aware of these? 
https://docs.datomic.com/cloud/best.html#datomic-query#2022-08-1218:10lassemaattayeah#2022-08-1218:11lassemaattaalthough reading the datomic documentation sometimes feels a bit like reading clojure spec docs: it all makes sense but first you have to do the work to really understand it yourself 🙂#2022-08-1218:18lassemaattain any case, thanks a lot for helping me#2022-08-1218:35favilahappy to help#2022-08-1220:58nandoRegarding dev-local, can I simply copy and paste the db.log and log.idx files under the system and database directories to move a database from one computer to another? Any potential issues with that approach?#2022-08-1221:01ghadino issue, you can just plop them over @nando
#2022-08-1221:03nandoThank you!#2022-08-1222:49pppaulI'm trying to make a view to display all the entities i deleted in a transaction. i've been looking over the docs and at the datomic API, and i'm not very sure how to do this. i've used filtered DBs before, but i need 1 or 2 extra steps and i'm not sure what they are. right now i just have the entity that is the tx where the deletions occurred. i need to get the IDs of the things that were deleted.
this is what i have so far
(let [t (->> (d/entity db entity-id)
             (d/entity-db)
             (d/basis-t))]
  (->> (d/tx-range (d/log conn) nil t)
       first
       :data
       (map (fn [[e a v _]]
              [e a v]))))
the datoms i'm getting back look like they are from the start of my DB creation
(let [t (->> (d/entity db entity-id)
             (d/entity-db)
             (d/basis-t))]
  (->> (d/tx-range (d/log conn) nil t)
       ;;first
       (take-last 5)
       (mapcat :data)
       (map (fn [[e a v _]]
              [e a v]))))
if i do the above i get datoms that seem to have the right dates, but they don't look like my deleted data
(d/tx-range (d/log conn) (dec t) (inc t))
this seems to give me the deleted data. it's still a mess to look at cus there are no idents for attributes.
can i do a pull on this data somehow?
-----------------------SOLVED---------------------
(let [t (->> (d/entity db entity-id)
             (d/entity-db)
             (d/basis-t))
      deleted-ids (->> (d/tx-range (d/log conn) t (inc t))
                       (mapcat :data)
                       (map (fn [[e a v _]]
                              e))
                       (into #{})
                       vec)]
  (d/pull-many
    (d/as-of db (dec t))
    '[*]
    deleted-ids))
yey!#2022-08-1301:02favilaWhen you say “deleted entities”, do you mean entities which had retractions?#2022-08-1301:03favilaAnd you just want those entity ids? Or you want to pull from them in some way?#2022-08-1312:07pppaulyeah, retractions that i want to pull, which is what i showed in the code#2022-08-1223:33pppaulwould really appreciate alternative ways to do this. cus that's a pretty crazy path i went down#2022-08-1511:52robert-stuttafordhas any work been done to make Datomic on-prem's metrics and logging compatible with OpenTelemetry? we would love to use http://Honeycomb.io but not having the database in the picture is a pretty big blocker 😅
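Re the request above for alternative ways to view a transaction's retractions: one sketch is to ask a history db for datoms whose added? flag is false, instead of walking the log. The Datomic calls (d/history, a 5-element datom pattern, d/tx->t, d/as-of, d/pull-many) are real peer API, but `retracted-eids` and the wiring are illustrative and untested against a live db.

```clojure
;; Pure helper: given [e a v tx added] tuples, keep the entity ids
;; of the retractions (added = false).
(defn retracted-eids [datoms]
  (->> datoms
       (remove last)   ; last element is the added? flag
       (map first)
       distinct
       vec))

;; With the peer API (hypothetical wiring, sketch only):
;; (let [t    (d/tx->t tx-eid)
;;       eids (retracted-eids
;;              (d/q '[:find ?e ?a ?v ?tx ?added
;;                     :in $ ?tx
;;                     :where [?e ?a ?v ?tx ?added]]
;;                   (d/history db) tx-eid))]
;;   (d/pull-many (d/as-of db (dec t)) '[*] eids))
```

This avoids scanning the log range but still pulls against the as-of basis just before the transaction, like the solved version above.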
#2022-08-1515:08Drew VerleeI think of an index as a way to group a set of things by some property. Given that a uuid is essentially random, it's unclear to me when a squuid (https://docs.datomic.com/on-prem/schema/identity.html#squuids - a uuid with an ordered time component) would be useful, except for a general query against all such uuids? like "give me everything in the db from the last week"? It's really indexed by just the time aspect, right?#2022-08-1515:28favilaThe point of a squuid is to reduce garbage from index fragmentation, full stop
#2022-08-1515:29favilanew values accumulate on the “tail” of the tree vs randomly distributed throughout the entire tree{:tag :div, :attrs {:class "message-reaction", :title "eyes"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("👀")} " 1")}
#2022-08-1515:38Drew VerleeSo it will improve performance in general query cases, not just the time based one i was thinking of?#2022-08-1515:45favilait won’t improve performance in queries, except in pretty specific cases
#2022-08-1515:46favilait will reduce the amount of garbage segments the transactor produces when it makes an index, and may make preparing the new indexes faster also.
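For the curious, the "tail of the tree" behavior comes from the squuid layout: the high 32 bits of the UUID's most-significant long hold seconds-since-epoch. Below is a rough, illustrative reconstruction of the idea, not Datomic's exact implementation (use d/squuid and d/squuid-time-millis in real code).

```clojure
;; Illustrative squuid construction: stamp epoch-seconds into the
;; top 32 bits of a random UUID, so ids minted around the same time
;; sort near each other in AVET instead of scattering across the tree.
(defn squuid* []
  (let [u    (java.util.UUID/randomUUID)
        secs (quot (System/currentTimeMillis) 1000)
        msb  (bit-or (bit-shift-left secs 32)
                     (bit-and (.getMostSignificantBits u) 0xFFFFFFFF))]
    (java.util.UUID. msb (.getLeastSignificantBits u))))

;; Recover the time component (what d/squuid-time-millis does,
;; modulo the millis/seconds unit):
(defn squuid-secs [^java.util.UUID u]
  (bit-shift-right (.getMostSignificantBits u) 32))
```

Since only the top 32 bits are time-ordered, squuids from the same second still sort randomly among themselves; the win is locality, not total ordering.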
#2022-08-1518:41Drew VerleeWhat type is a db/id? can it be a long id? i'm trying to make sense of this https://github.com/tonsky/datascript/issues/292 about datascript, i'm not sure what they mean and i'm worried i'm missing something. by jvm do they mean datomic? why would the jvm have any say on anything "id" related?
> It seems that long ids are no longer supported on JVM since 0.18.0. I also added a validation that throws on both JVM and JS if value is out of range.
#2022-08-1519:25favilaThe comment means that datascript rejects entity ids in the long range (specifically > 0x7FFFFFFF)
#2022-08-1519:27favilait rejects them on the JVM even though they would be fine, because they would not be fine on js/cljs#2022-08-1519:28Drew Verleeyep, that makes sense and mirrors what i'm seeing on my end to. datomic says an eid should be a nat-int? and that seems to include the eids that are causing exceptions in datascript.
Thanks for the help :thumbsup:#2022-08-1605:24jumar@jarrodctaylor first, thanks for the amazing Max Datom tutorial.
I'm working on Level 12 and got stuck on this error:
clojure.lang.ExceptionInfo: 'comment-count-str' needs to be listed under :xforms in datomic/ion-config.edn {:cognitect.anomalies/category :cognitect.anomalies/forbidden, :cognitect.anomalies/message "'comment-count-str' needs to be listed under :xforms in datomic/ion-config.edn"}
Reading the :xform option docs, it seems that the function indeed needs to be specified in the config file - I thought it was going to be there,
but maybe this restriction is something that was only added after the course was created?#2022-08-1605:24jumarHere's my query:
(d/q '[:find (pull ?posts [{:post/author [:user/first+last-name]}
                           [:post/comments :xform comment-count-str]])
       :where [?posts :post/author _]]
     db)#2022-08-1605:33jumarI'm also facing the same problem when running on my machine: https://github.com/jumarko/datomic-starter-sample/pull/10#2022-08-1605:39jumarFound the answer here: https://www.reddit.com/r/Clojure/comments/tw66df/comment/i3kldxb/
The function call must be fully qualified - that feels a bit odd, compared to normal clojure function calls...#2022-08-1614:18jarrodctaylorGlad you got it sorted out. Thanks for working through the app 🙂#2022-08-1610:03rende11Hi! What is the best way to do optional filtering in a query? The desired behavior is: when the vector of user-group/uids is empty or nil, all users are selected; otherwise, select users in the provided groups.
(d/q '[:find ?uid
       :in $ [?ugi-uid ...]
       :where
       [?u :user/uid ?uid]
       [?ug :user-group/users ?u]
       [?ug :user-group/uid ?ugi-uid]]
     (user/ctx-db) [#uuid "622f8a38-91b6-4907-b281-62db36568454"])
#2022-08-1612:14thumbnailJust have two paths, one with filtering and one without. Resulting in two clear queries
#2022-08-1619:12Mitchell HarrisWe’ve used the fact that the query is data to construct the query in code based on desired filters.#2022-08-1707:34robert-stuttafordi recommend simply writing a different function for each situation, and not trying to invent query-as-data composition logic about it. you'll thank me later 🙂
;; original
(d/q '[:find ?uid
       :in $ [?ugi-uid ...]
       :where
       [?u :user/uid ?uid]
       [?ug :user-group/uid ?ugi-uid]
       [?ug :user-group/users ?u]]
     (user/ctx-db) [#uuid "622f8a38-91b6-4907-b281-62db36568454"])
;; this is slow; it'll find ALL the users, then find all the matching groups, and then finally filter ALL users by those groups.
;; for performance, we should restrict the scope as early as possible to reduce the number of datoms that are considered:
;; first find the groups that match the ids, then find all the users that are in those groups, and then only return those users' ids.
(d/q '[:find ?uid
       :in $ [?ugi-uid ...]
       :where
       [?ug :user-group/uid ?ugi-uid]
       [?ug :user-group/users ?u]
       [?u :user/uid ?uid]]
     (user/ctx-db) [#uuid "622f8a38-91b6-4907-b281-62db36568454"])
;; when no groups are specified, the query is a lot simpler:
(d/q '[:find ?uid
       :in $
       :where
       [?u :user/uid ?uid]]
     (user/ctx-db))
;; or far more simply (i assume :user/uid is indexed):
(map :v (d/datoms (user/ctx-db) :avet :user/uid))
;; so the final code would be two functions:
(defn user-ids-by-groups [db group-ids]
  (d/q '[:find ?uid
         :in $ [?ugi-uid ...]
         :where
         [?ug :user-group/uid ?ugi-uid]
         [?ug :user-group/users ?u]
         [?u :user/uid ?uid]]
       db group-ids))

(defn all-user-ids [db]
  (map :v (d/datoms db :avet :user/uid)))#2022-08-1709:00rende11@U0509NKGK That works in simple cases; if I have 2+ conditions I will struggle with it#2022-08-1709:44Ivar RefsdalI've been using d/query which takes a map as input with parameters as :query :where
And then I'm putting it together based on the search input variables with cond->.
I'm doing something like this essentially:
(defn add-query [org new]
  (merge-with into org new))

(comment
  (let [a 1
        b nil
        c 3]
    (into (sorted-map)
          (cond->
            '{:find  [[(pull ?e pattern) ...]]
              :in    [$ pattern]
              :where []
              :args  []}
            (some? a) (add-query {:where ['[?e :e/a ?a]]
                                  :in    ['?a]
                                  :args  [a]})
            (some? b) (add-query {:where ['[?e :e/b ?b]]
                                  :in    ['?b]
                                  :args  [b]})
            (some? c) (add-query {:where ['[?e :e/c ?c]]
                                  :in    ['?c]
                                  :args  [c]})))))#2022-08-1709:45Ivar RefsdalNote that the actual params for d/query are slightly different:
https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/query
The above was just an example, but should be easily adaptable to real world datomic.api/query usage#2022-08-1709:45Ivar RefsdalHope that helps 🙂#2022-08-1709:49Ivar RefsdalBe careful with your queries though. I've been OOMing in production too many times.#2022-08-1814:21Ivar RefsdalFull working example here:
(ns backend.datomic-search-demo
  (:require [datomic.api :as d]))

(def conn
  (let [uri (str "datomic:")]
    (d/delete-database uri)
    (d/create-database uri)
    (d/connect uri)))

(def schema
  [#:db{:ident :e/a, :cardinality :db.cardinality/one, :valueType :db.type/long}
   #:db{:ident :e/b, :cardinality :db.cardinality/one, :valueType :db.type/long}
   #:db{:ident :e/c, :cardinality :db.cardinality/one, :valueType :db.type/long}
   #:db{:ident :e/name, :cardinality :db.cardinality/one, :valueType :db.type/string}])

@(d/transact conn schema)

(defn add-query [org new]
  (merge-with into org new))

@(d/transact conn [{:e/a 1
                    :e/b 2
                    :e/name "First"}
                   {:e/c 3
                    :e/b 2
                    :e/a 1
                    :e/name "Second"}])

(defn make-query [{:keys [a b c]}]
  (let [query (cond->
                '{:find  [[(pull ?e [:*]) ...]]
                  :in    [$]
                  :where []
                  :args  []}
                (some? a) (add-query {:where ['[?e :e/a ?a]]
                                      :in    ['?a]
                                      :args  [a]})
                (some? b) (add-query {:where ['[?e :e/b ?b]]
                                      :in    ['?b]
                                      :args  [b]})
                (some? c) (add-query {:where ['[?e :e/c ?c]]
                                      :in    ['?c]
                                      :args  [c]}))]
    {:args  (:args query)
     :query (select-keys query [:find :in :where])}))

(comment
  (-> (make-query {:a 1})
      (update :args (partial into [(d/db conn)]))
      (d/query)))

(comment
  (-> (make-query {:a 1 :c 3})
      (update :args (partial into [(d/db conn)]))
      (d/query)))
Hope that helps @U4U68ADKR#2022-08-1818:12rende11Thx @UGJE0MM0W! My current solution is similar - just build the query map with cond->. Nice tip - use add-query - I'll take it!
#2022-08-1707:44seepelI'm curious: do folks find it more useful to create functions that perform queries, or variables that hold queries? For example, do you prefer
(defn find-by-email [db email]
(d/q '[:find ?e
:in $ ?email
:where [?e :email ?email]]
db
email))
or
(def find-by-email '[:find ?e
:in $ ?email
:where [?e :email ?email]])#2022-08-1707:49Christian JohansenThe first, by far, since the query depends on the email parameter. Sticking the query in a var and then using it elsewhere will make the dependency on the var less obvious.
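[Editor's sketch, not from the thread: the function approach also allows validating arguments before the query runs; the assert, the -q suffix, and returning a query-map instead of calling d/q directly are illustrative choices.]

```clojure
;; Sketch: a query-building function that validates its argument.
;; Returning a query-map keeps it easy to test without a database.
(defn find-by-email-q [email]
  (assert (string? email) "email must be a string")
  {:query '[:find ?e
            :in $ ?email
            :where [?e :email ?email]]
   :args [email]})
```

A caller would add the db onto :args and hand the map to d/query; a bad argument fails fast with an AssertionError instead of silently matching nothing.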
#2022-08-1714:33Linus EricssonIf you are to use the query in var format, consider using the map format - it's much easier to do introspection. I have used that map to add extra parameters (sorting etc)#2022-08-1802:55steveb8nfn for me too. allows validation of args#2022-08-1804:41seepelThanks for the feedback! Makes sense to me. If anyone has a counter argument I'd still love to hear it!#2022-08-1806:32steveb8nI've got one for you. I frequently compose queries using pull expressions. those pull expressions can be def'd since they don't have any dependencies#2022-08-1908:24seepelAh interesting, I could see that being useful. By pull expressions do you mean something like this?
(def everything '[*])
(d/q '[:find (pull ?u pattern)
:in $ pattern
:where [?u :user/email _]]
db
everything)#2022-08-1908:32steveb8nyes that’s one way to do it. I tend to use app-template to merge them in#2022-08-1908:34steveb8nI try to use :in values for :where clauses only but that’s just a personal preference#2022-08-1908:37seepelWhat is app-template?#2022-08-1908:48steveb8nhttps://clojuredocs.org/clojure.template/apply-template sorry it’s Fri-eve here, I should not drink and slack#2022-08-1909:09seepelAwesome, thanks so much!
🍻 Cheers, for what it's worth I'm glad you did!#2022-08-1723:08tony.kayI've noticed that the official API docs (https://docs.datomic.com/client-api/datomic.client.api.html#var-db) on d/db say that there is ILookup access to a key named :db-name. That is nil for me in Cloud and dev (memory) databases. I see a :database-id in production databases, and an :id in memory ones. Not sure where the best place to report that is, so I'm saying it here in hopes someone on the core team will pick it up. I'm doing caching of things based on which database something comes from, so having a key I can safely derive from the db that isn't the db itself is important and useful. For now I've been using the :database-id, but that worries me since it is an undocumented key.
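[Editor's note on the def'd pull expressions idea above: since pull patterns are plain data, they compose with ordinary vector operations. A sketch, with attribute names borrowed from the transfer/account example discussed later in this log:]

```clojure
;; Reusable pull sub-patterns, def'd because they are pure data with
;; no runtime dependencies.
(def owner-pattern [:user/id :user/first-name :user/last-name])

(def account-pattern
  [:account/id :account/balance {:account/owner owner-pattern}])

;; Composed pattern for pulling a transfer with nested accounts/owners.
(def transfer-pattern
  [:transfer/id
   {:transfer/from account-pattern}
   {:transfer/to account-pattern}])
```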
#2022-08-2206:56jumarGoing through the Max Datom tutorial again, https://max-datom.com/#/40A6D16E-2FB4-4F8E-898F-33BF6F9CC4E0.
While my query got accepted, I'm having trouble pulling all the data they show.
Specifically, I would like to pull all the attributes from a nested entity.
More details in the thread.#2022-08-2206:57jumarThis is the initial query that needs to be extended:
(def transfer-id #uuid "59B9C791-74CE-4C51-A4BC-EF6D06BEE2DB")
(d/q '[:find (pull ?e [*])
:in $ ?transfer-id
:where [?e :transfer/id]] db transfer-id)
They say:
> Pull all :transfer/from and :transfer/to data including :account/owner for a recently reported transfer :transfer/id #uuid "59B9C791-74CE-4C51-A4BC-EF6D06BEE2DBA"
(note the uuid here is incorrect, it has extra "A")#2022-08-2207:18jumarAfter quite a bit of struggle, I ended up with this:
(d/q '[:find (pull ?transfer [*
{:transfer/from [* {:account/owner [*]}]}
{:transfer/to [* {:account/owner [*]}]}])
:in $ ?transfer-id
:where
[?transfer :transfer/id ?transfer-id]
[?transfer :transfer/to ?to-account]
[?transfer :transfer/from ?from-account]] db transfer-id)
Thinking about this more, the difference I see on my machine might be because I have incomplete data
They show this result:
[[{:db/id 92358976733273,
:transfer/from
{:db/id 92358976733272,
:account/balance 8900,
:account/id #uuid "5164b8da-2fe4-41da-a5fd-1a697be1d2dd",
:account/owner
{:db/id 92358976733269,
:user/first-name "Sonny",
:user/id #uuid "afb83133-3a2e-40ce-91f8-2de4f61361de",
:user/last-name "Diskon"}},
:transfer/id #uuid "59b9c791-74ce-4c51-a4bc-ef6d06bee2db",
:transfer/to
{:db/id 92358976733271,
:account/balance 2300,
:account/id #uuid "d381dc80-c582-45eb-89e9-f6e188a71a29",
:account/owner
{:db/id 92358976733268,
:user/first-name "Muhammad",
:user/id #uuid "bfe00de4-bc19-4395-ba3b-2384ecf1a569",
:user/last-name "CD"}},
:transfer/amount 1000}]]
I get the same but :account/owner entities aren't expanded (they are only db ids).#2022-08-2207:19jumarSo a different question: is there a simpler way to achieve what I'm after.
Again, my query is this:
(d/q '[:find (pull ?transfer [*
{:transfer/from [*]}
{:transfer/to [*]}])
:in $ ?transfer-id
:where
[?transfer :transfer/id ?transfer-id]
[?transfer :transfer/to ?to-account]
[?transfer :transfer/from ?from-account]] db transfer-id)#2022-08-2208:00thumbnailYou don't need to bind ?to-account or ?from-account, but you do need to pull :account/owner too in order to get the correct query response#2022-08-2208:58jumarAh, of course - the other two clauses were a relic from an earlier experiment.
I'm not sure I got the part about "need to pull :account/owner" - how would I do that?
This is what I have right now
(d/q '[:find (pull ?transfer [* {:transfer/from [*] :transfer/to [*]}])
:in $ ?transfer-id
:where [?transfer :transfer/id ?transfer-id]]
(db) transfer-id)
#2022-08-2210:45thumbnailYou can add {:account/owner [*]} to the pull expression for both transfer/from and transfer/to#2022-08-2210:47jumarIt doesn't seem to work or I didn't understand you.
The first query already works and is what I have.
The second one doesn't:
(d/q '[:find (pull ?transfer [* {:transfer/from [*] :transfer/to [*]}])
:in $ ?transfer-id
:where [?transfer :transfer/id ?transfer-id]]
(db) transfer-id)
;; => [[{:db/id 92358976733273,
;; :transfer/id #uuid "59b9c791-74ce-4c51-a4bc-ef6d06bee2db",
;; :transfer/from
;; {:db/id 92358976733272,
;; :account/id #uuid "5164b8da-2fe4-41da-a5fd-1a697be1d2dd",
;; :account/balance 8900,
;; :account/owner #:db{:id 92358976733269}},
;; :transfer/to
;; {:db/id 92358976733271,
;; :account/id #uuid "d381dc80-c582-45eb-89e9-f6e188a71a29",
;; :account/balance 2300,
;; :account/owner #:db{:id 92358976733268}},
;; :transfer/amount 1000}]]
(d/q '[:find (pull ?transfer [* {:account/owner [*]}])
:in $ ?transfer-id
:where [?transfer :transfer/id ?transfer-id]]
(db) transfer-id)
;; => [[{:db/id 92358976733273,
;; :transfer/id #uuid "59b9c791-74ce-4c51-a4bc-ef6d06bee2db",
;; :transfer/from #:db{:id 92358976733272},
;; :transfer/to #:db{:id 92358976733271},
;; :transfer/amount 1000}]]
#2022-08-2211:25thumbnailThe pull expression you’re looking for is something like this:
[* {:transfer/from [*, {:account/owner [*]}] :transfer/to [*, {:account/owner [*]}]}]#2022-08-2211:26thumbnailthat way you’re also pulling all the attributes in the account/owner reference#2022-08-2212:12jumarOh, I see - I tried that before and it didn't make a difference, at least as far as the tutorial goes.
It seems like the simpler version returned the exact same data: [* {:transfer/from [*] :transfer/to [*]}]#2022-08-2212:30thumbnailI passed the challenge with the pull expression i posted though 👀#2022-08-2218:11jumarWhat I meant is that they both work 🙂
So I lean towards the shorter version.#2022-08-2219:17thumbnailMaybe im misunderstanding, but your pe wont pull the attributes of the account owner, which is required by the challenge, no :thinking_face:?#2022-08-2303:49jumarYou are right 🙂
I thought my query was enough because Max Datom accepted it - i'm not sure why.
It pretended it returns all the data but in fact it doesn't.
After I added the relevant user (:account/owner) data to my local db I can clearly see,
that my query doesn't return any data about :account/owner apart from :db/id.
If I add these extra bits then it works.
This is then my final version:
(d/q '[:find (pull ?transfer [* {:transfer/from [* {:account/owner [*]}]
:transfer/to [* {:account/owner [*]}]}])
:in $ ?transfer-id
:where [?transfer :transfer/id ?transfer-id]]
(db) transfer-id)
;;=>
[[{:db/id 92358976733273,
:transfer/id #uuid "59b9c791-74ce-4c51-a4bc-ef6d06bee2db",
:transfer/from
{:db/id 92358976733272,
:account/id #uuid "5164b8da-2fe4-41da-a5fd-1a697be1d2dd",
:account/balance 8900,
:account/owner
{:db/id 92358976733269,
:user/id #uuid "afb83133-3a2e-40ce-91f8-2de4f61361de",
:user/first-name "Sonny",
:user/last-name "Diskon",
:user/first+last-name ["Sonny" "Diskon"]}},
:transfer/to
{:db/id 92358976733271,
:account/id #uuid "d381dc80-c582-45eb-89e9-f6e188a71a29",
:account/balance 2300,
:account/owner
{:db/id 92358976733268,
:user/id #uuid "bfe00de4-bc19-4395-ba3b-2384ecf1a569",
:user/first-name "Muhammad",
:user/last-name "CD",
:user/first+last-name ["Muhammad" "CD"]}},
:transfer/amount 1000}]]
So the takeaway is that * basically pulls only the entity's direct attributes,
but not the attributes of referenced entities?#2022-08-2304:40thumbnail🙂 nice, glad you managed! And yes that's basically it. This is the relevant part of the doc https://docs.datomic.com/cloud/query/query-pull.html#nesting#2022-08-2221:41Nedeljko RadovanovicHey people,
I'm having a big issue that I can't fix.
So I want to switch from a mem database to dev-local, but after downloading the Cognitect dev-tools and finishing their tutorial, my code gives this message:
Execution error (FileNotFoundException) at datomic.client.api.impl/serialized-require* (impl.clj:16).
Could not locate datomic/dev_local/impl__init.class, datomic/dev_local/impl.clj or datomic/dev_local/impl.cljc on classpath. Please check that namespaces with dashes use underscores in the Clojure file name.
I tried to find a fix on Google and in the documentation but couldn't find a solution. I am not using Datomic Cloud; I am trying to connect via a transactor locally.
I even followed this tutorial https://www.youtube.com/watch?v=QYJeHyd47tM&t=278s&ab_channel=EngineeringwithV for connecting dev-local and it still doesn't work.
I hope someone knows the solution - thank you for your time.#2022-08-2302:41jarrodctaylorFirst make sure you have the dependency specified correctly in your deps.edn file. Something along the lines of
{:paths ["src"]
:deps {com.datomic/dev-local {:mvn/version "1.0.243"}}}#2022-08-2309:55timoHi there, I am struggling with a 4TiB oracle-DB behind Datomic. I have a huge test-db as well, where I restored a backup into a fresh oracle-db and it only used 10% from the actual test-db. So my assumption is, that the prod-db is only a fraction after restore as well.
Why is that? Is it possible to reduce the size of my prod-db without restoring and switching over?
(Update: I've tried gc-storage but it didn't reduce the size of the test-db significantly)#2022-08-2310:30roltdid you properly reclaim the space on the db side too? This was my issue last time, the default reclaim behaviour was not adapted to datomic usage and we had to use a more aggressive one#2022-08-2313:26jaret@U02F0C62TC1 is correct, @U4GEXTNGZ The most likely cause for disk size disparity pre and post restore is garbage. Datomic backups do not include garbage and reflect actual disk size.#2022-08-2313:30timook thanks. that helps. any idea which word I need to shout at the oracle-admin for him to know which direction to go?#2022-08-2315:11favilaIs it the same issue as here? https://clojurians.slack.com/archives/C03RZMDSH/p1655301426331359
#2022-08-2315:14favilaThere's also gc-deleted-dbs. If the storage contains datomic databases that were deleted at the datomic level (d/delete-database), the segments from that database are not actually deleted from storage without an extra operation. https://docs.datomic.com/on-prem/operation/capacity.html#garbage-collection-deleted-production#2022-08-2516:35babardoHello, we would like to whitelist the IPs able to connect to our Datomic Cloud https://blog.datomic.com/2018/02/access-control-in-datomic-cloud.html. We can do this by updating the security group associated with the Datomic bastion.
• My question is: when we upgrade the Datomic stack, will changes made on the bastion SG be reset?
• If yes, is there a correct way to do that?#2022-08-2517:59Daniel JompheWhen you upgrade, there will no longer be any bastion. Datomic Cloud got simplified quite a lot by the upgrade that came out near the summer of 2021. I suppose you're asking from a stack deployed before that, and never upgraded since then... I suggest you take a look at the release notes.
#2022-08-2519:03babardoOh thanks, this is true indeed, I need to look at new releases#2022-08-2519:16Daniel JompheThis might be useful
https://forum.datomic.com/t/experience-report-updating-from-solo-to-datomic-cloud-884-9095/1913
#2022-08-2517:072FOGood day,
I'm looking for learning resources (blogs, vids, demo repos) that demonstrate data modeling in datalog (any and all flavors).
These are the resources I've used so far:
• Domain Modeling with Datalog
https://youtube.com/watch?v=oo-7mN9WXTw
• Prototyping with Clojure
https://github.com/aliaksandr-s/prototyping-with-clojure/blob/master/tutorial/chapter-04/04-Data%20modeling.md
• Declarative Domain Modeling for Datomic Ion/Cloud
https://youtu.be/EDojA_fahvM?t=704
I found these resources to focus more on datalog itself (syntax, queries) and/or the DB's API:
• Datomic tutorial
https://docs.datomic.com/cloud/tutorial/client.html
• XTDB space adventure
https://nextjournal.com/xtdb-tutorial
• Learn datalog
http://www.learndatalogtoday.org/
#2022-08-2609:12nandoI assume you have found http://www.learndatalogtoday.org/ ?
#2022-08-2619:132FOthanks, yep that was my intro to datalog, its very good but I didn't find much there wrt domain modeling.
I edited the question to indicate the resources I've used#2022-08-2609:05nandoTo confirm what I think I understand from the docs, a transaction to update 2 different entities is possible and would look like this. What I’m doing here is essentially marking the batch item as complete or fulfilled and subtracting the batch weight from the stock.
(defn save-batch-item
[m]
(d/transact conn {:tx-data [{:db/id (:id m)
:batch-item/weight (:weight m)
:batch-item/complete? (:complete? m)}
{:db/id (:nid m)
:nutrient/grams-in-stock (:new-grams-in-stock m)}]}))#2022-08-2609:16favilaYes. Transaction data is a list of commands of the form [op arg ,,,]. Maps are just syntax sugar for the equivalent [:db/add entity attr value]. Transaction functions are just commands that expand to other commands in the transactor. Expansion continues until a fixed point where only primitive :db/add or :db/retract commands are left.#2022-08-2609:16favilaGiven this model, there are no meaningful entity "boundaries" to transactions. It's just commands.
#2022-08-2610:49nando@U09R86PA4 👍 Thanks for the confirmation!#2022-08-2615:11Antoine Zimmermannhello, i'm looking into datomic using postgres storage for my next project and i noticed that the version of the jdbc driver mentioned in the docs is quite old - does it still work with the latest version?
Edit: it works locally using md5 pwd encryption with pg:latest and driver 42.5.0
Edit 2: using a PG14 self managed DB (scaleway) it works using datomic-pro-1.0.6397, openjdk 8 and pg driver 42.5.0#2022-08-3117:28Drew VerleeIs there a function which will turn the tx-data map and nested map forms into the list forms? e.g
;; have
[{:db/id 316659348816869
:internal-team/id #uuid "5caf6e45-f54d-4c0e-9658-33f63c069569"}]
;; want
[[:db/add 316659348816869 :internal-team/id #uuid "5caf6e45-f54d-4c0e-9658-33f63c069569"]]
I want this so it's easier to change the :db/id's - if there is another simple way to do that I would be keen to hear it 🙂
(defn server-tx->client-tx!
([server-tx]
(server-tx->client-tx! server-tx (atom {:client-eid-idx 0
:client-eid 0
:server-eid->client-eid {}})))
([server-tx client-indexing-state]
(letfn [(server-eid->client-eid!
[server-eid]
(:client-eid
(swap! client-indexing-state
(fn [{:keys [client-eid server-eid->client-eid]}]
(let [id (server-eid->client-eid server-eid (dec client-eid))]
{:client-eid-idx client-eid
:client-eid id
:server-eid->client-eid (assoc server-eid->client-eid server-eid id)})))))]
(->> server-tx
(postwalk
#(if (and (map? %) (:db/id %))
(update % :db/id server-eid->client-eid!)
%))))))
(tests
"base case"
(server-tx->client-tx! []) := []
"turns ids into negative numbers which datascript sees as temp"
(server-tx->client-tx! [{:db/id 1}]) := [{:db/id -1}]
"correctly syncs across reused ids in a transaction"
(server-tx->client-tx! [{:db/id 1} {:db/id 1}]) := [{:db/id -1} {:db/id -1}]
(server-tx->client-tx! [{:db/id 1} {:db/id 2}]) := [{:db/id -1} {:db/id -2}]
"it works on nested hashmaps"
(server-tx->client-tx! [{:a [{:db/id 1}]}]) := [{:a [{:db/id -1}]}]
(server-tx->client-tx! [{:a [{:db/id 1}] :b [{:db/id 1}]}]) := [{:a [{:db/id -1}] :b [{:db/id -1}]}]
"note its not just turning the given id negative its decrementing from 0"
(server-tx->client-tx! [{:db/id 100}]) := [{:db/id -1}]
nil)#2022-08-3121:36Drew VerleeI'll probably have to move the atom out of that function and into our re-frame db so we can reference it when doing future tx... or maybe it could return it along with the transactions. ugh.
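[Editor's sketch of a smaller variant of the same remapping, not from the thread: let memoize own the server-eid→tempid bookkeeping so the atom never escapes the function.]

```clojure
(require '[clojure.walk :as walk])

;; Each distinct server eid gets a fresh negative tempid; memoize
;; guarantees a reused eid maps to the same tempid within one pass.
(defn make-eid-mapper []
  (let [counter (atom 0)]
    (memoize (fn [_server-eid] (swap! counter dec)))))

(defn remap-db-ids [server-tx]
  (let [->client-eid (make-eid-mapper)]
    (walk/postwalk
     #(if (and (map? %) (:db/id %))
        (update % :db/id ->client-eid)
        %)
     server-tx)))
```

To reference the mapping in later transactions (the re-frame concern above), the mapper could instead be created once and kept alongside the app state.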
:mvn/repos {"datomic-cloud" {:url ""}}
I get the following error message:
The following errors were found during project resolve: /Users/hadilsabbagh/yardwerkz/deps.edn: Could not find artifact com.datomic:ion:jar:0.9.28 in central ()
I cannot find com.datomic/ion 0.9.28 referred to anywhere in my deps.edn or in the repository. Any suggestions would be greatly appreciated!#2022-08-3121:19hanDerPederI'd check if it's listed in -Stree. clj reads multiple deps.edn files#2022-08-3121:21hanDerPederMy first thought was it probably has something to do with the cache, but reading https://clojure.org/reference/deps_and_cli i think it would be invalidated when you changed deps.edn.#2022-08-3121:23hanDerPederAlso, probably more appropriate to ask this in #tools-deps.#2022-08-3121:23hadilsThank you! Found it in clj -Stree#2022-08-3121:23Daniel JompheOur Datomic Cloud Ion system needs to track events to send (typical) user-level notifications.
Notifications have different churn characteristics than most of the DB model datoms.
We don’t expect to need to query them more than one year in the past.
We fear that our main DB might grow too fast.
We thought of creating a separate DB per year of notifications (e.g. notif-2021, notif-2022) but given https://forum.datomic.com/t/joining-across-databases-in-datomic-cloud/1632, this wouldn't be practical, would it?
What are good fit or bad fit solution examples with Datomic Cloud Ions?#2022-09-0103:28Brett RowberryI’m on an application team using PostgreSQL. We’re considering Datomic Cloud. There is an analytics team that wants us to stream all our data to them via https://aws.amazon.com/kinesis/data-firehose/. I’m imagining there’s a way to hook up the https://blog.datomic.com/2013/10/the-transaction-report-queue.html to it somehow. Maybe I could write an Ion subscribing with the Datomic client that knows how to publish to Firehose?#2022-09-0111:52Daniel JompheDatomic Cloud's Client API doesn't support the Peer API's txReportQueue.
You might want to install a transaction function that pushes each transaction to a queue as a side-effect.
C.f. https://forum.datomic.com/t/tx-report-queue/316/4
#2022-09-0113:32Brett RowberryLet's pretend I know nothing about Datomic, except for marketing and some YouTube videos (which is true 😆). What does it mean to install a transaction function ?#2022-09-0113:34Brett RowberryI guess that's what this is about? https://docs.datomic.com/cloud/transactions/transaction-functions.html#custom#2022-09-0113:37Daniel JompheYes, so it's app-level code that you deploy for the transactor instead of for the user-level-ion-app.
Ion can deploy code to:
• Query groups
• Primary group (incl. transactor)
#2022-09-0113:38Brett RowberryMakes sense. Thanks!#2022-09-0113:39Daniel JompheYou're welcome.
Honestly, when discussing these things, we're in the darker corners of Datomic experience.
It's much easier to get answers for Datomic On-Prem than for Cloud.
OTOH, given its constraints, it seems it might be harder to shoot ourselves in the foot with Cloud than it is with On-Prem's more flexible design.
#2022-09-0111:09jumarI'm struggling to make a query to list "recipes" in my datomic db which are either public or private.
I tried to use or as in
'[:find (pull ?e pattern)
:in $ pattern account-id
;; Or clause:
:where (or [?e :recipe/public? true]
(and [?owner :account/account-id account-id]
[?e :recipe/owner ?owner]
[?e :recipe/public? false]))]
.. but that's failing with:
Caused by: java.lang.AssertionError: Assert failed: All clauses in 'or' must use same set of vars, had [#{?e} #{?e} #{?owner ?e}]
(apply = uvs)
Then I tried or-join:
'[:find (pull ?e pattern)
:in $ pattern account-id
;; Or clause:
:where (or-join [?e]
[?e :recipe/public? true]
(and [?owner :account/account-id account-id]
[?e :recipe/owner ?owner]
[?e :recipe/public? false]))]
but that doesn't return what I want - it seems to only return recipes that are public.
Before, I had those as two separate queries and it worked well:
;; public recipes
[:find (pull ?e pattern)
:in $ pattern
:where [?e :recipe/public? true]]
;; private recipes
'[:find (pull ?e pattern)
:in $ ?account-id pattern
:where
[?owner :account/account-id ?account-id]
[?e :recipe/owner ?owner]
[?e :recipe/public? false]]
#2022-09-0111:10jumarHere's the whole schema - it's from the Learn Pedestal course: https://github.com/jacekschae/learn-pedestal-course-files/blob/main/increments/19-load-dataset/src/resources/cheffy#2022-09-0112:02jumarOk, silly mistake - it looks like my version is actually working and it's just a higher-level logic that transforms this into something that makes a test fail.#2022-09-0112:02thumbnailGlad you figured it out, i take it that the or-join one is correct 😁? #2022-09-0112:20jumarWell, it actually is a bit weird.
pull requires me to use an argument name without a question mark, like pattern.
But or-join seems to require one with a question mark, like ?account-id.
So the correct version of my query is:
`[:find (pull ?e [*])
:in $ pattern account-id
;; Or clauses:
;; `or-join` is needed because the second clause (`and`) uses a different set of variables
:where (or-join [?e ?account-id]
[?e :recipe/public? true]
(and [?e :recipe/public? false]
[?owner :account/account-id ?account-id]
[?e :recipe/owner ?owner]))]
I also had to specify both ?e and ?account-id in the list that comes as the first arg of or-join.#2022-09-0112:21jumarFor plain account-id I get this error:
Caused by: java.lang.RuntimeException: Unable to resolve symbol: account-id in this context
#2022-09-0112:23jumarAh, no, it simply doesn't work.
I forgot to replace one occurrence.
With ?account-id I get this error:
... 81 more
Caused by: java.lang.Exception: Unable to find data source: $__in__3 in: ($ pattern $__in__3)
using this query:
[:find (pull ?e pattern)
:in $ pattern ?account-id
:where (or-join [?e ?account-id] ; it's the same error whether I specify ?account-id here or not
[?e :recipe/public? true]
(and [?e :recipe/public? false]
[?owner :account/account-id ?account-id]
[?e :recipe/owner ?owner]))]
#2022-09-0112:42jumarOk, I found the real problem - sometimes, this ?account-id input arg can be nil.
It's this condition that makes it fail with such a weird error.
So I can include this clause only when ?account-id is not nil:
[?owner :account/account-id ?account-id]
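[Editor's sketch of one way to do that conditional inclusion, not from the thread; it mirrors jumar's two working queries by picking the query shape up front so a nil account-id never reaches the bindings.]

```clojure
;; Returns the query as data: the or-join version when an account-id
;; is present, otherwise the public-only query with no extra binding.
(defn recipes-query [account-id]
  (if (some? account-id)
    '[:find (pull ?e [*])
      :in $ ?account-id
      :where (or-join [?e ?account-id]
                      [?e :recipe/public? true]
                      (and [?e :recipe/public? false]
                           [?owner :account/account-id ?account-id]
                           [?e :recipe/owner ?owner]))]
    '[:find (pull ?e [*])
      :in $
      :where [?e :recipe/public? true]]))
```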
#2022-09-0112:42pppauli'm not really sure why you have to make a complicated query for this#2022-09-0112:42jumarWhat's the simpler version?#2022-09-0112:42pppaulwhy can't you just focus on the public/private query alone?#2022-09-0112:43pppaulno or-join, no and#2022-09-0112:43jumarI need a single query for reasons.
So I'm looking for the way to get both results from a single query,
instead of having two queries (which actually are longer than a single one)#2022-09-0112:44jumarFor now, it seems it's the ?account-id parameter that makes it fail when it's nil.
If there's a simpler way, I'm happy to learn it 🙂.#2022-09-0112:45pppauli feel like doing or-join is probably making things way too complicated#2022-09-0112:45jumar"or" condition doesn't sound to me like something unusual.
But I'm a Datomic noob and mostly familiar with SQL databases.#2022-09-0112:46pppaulor is done via rules#2022-09-0112:46pppaulbut there is also the or statement#2022-09-0112:46jumarThat's what I tried and it doesn't work - see https://clojurians.slack.com/archives/C03RZMDSH/p1662030547421749#2022-09-0112:47pppaulor-join is a very special case, i'm not sure you really need it here, but i've never run into a situation where i have used it#2022-09-0112:48jumarAgain, I tried or and it doesn't work:
Caused by: java.lang.AssertionError: Assert failed: All clauses in 'or' must use same set of vars, had [#{?e} #{?e} #{?owner ?e}]
(apply = uvs)
It's because I need ?owner in the second set of rules.#2022-09-0112:48pppaul(and [?e :recipe/public? false]
[?owner :account/account-id ?account-id]
[?e :recipe/owner ?owner]))
i think you may make a rule for this, and i'm not sure the ?owner is doing anything#2022-09-0112:49pppaulbut i guess you want all recipes, and all owner recipes#2022-09-0112:49jumarIt should return only recipes whose owner is the account-id specified as the input argument#2022-09-0112:50pppaulif the owner has a relation to their recipes, you can get that without a query, just a pull#2022-09-0112:50jumaryes, the result should be: all public recipes + all recipes owned by given account.#2022-09-0112:50pppauli feel like you are better off with a pull#2022-09-0112:51jumarI'm using pull, so I'm not sure I understand.
I would appreciate a full query if you can provide that.#2022-09-0112:51pppaulyou can pull the related recipes from the owner#2022-09-0112:52pppaulso, query all public, (1 line query), and pull the owner recipes#2022-09-0112:52pppaul(d/pull db '[{:recipe/_owner [*]}] owner-id)#2022-09-0112:53jumarBut I want a single query.
I'm trying to pass my query to an interceptor which executes it and that doesn't know anything about the query.
Interceptor uses datomic.api/query function to get the results and pass it further.#2022-09-0112:53pppauli do almost everything in datomic via pulls, very rarely do i make queries, and they are usually doing stuff that is crazy (history stuff)#2022-09-0112:54pppaulyou can have a pull as part of a query#2022-09-0112:54jumarAgain, I use it already and this is what I have now:
(da/query
{:query '[:find (pull ?e pattern)
:in $ pattern ?account-id
:where (or-join [?e]
[?e :recipe/public? true]
(and [?e :recipe/public? false]
[?owner :account/account-id ?account-id]
[?e :recipe/owner ?owner]))]
:args [db recipe-pattern account-id]})
#2022-09-0112:54pppaulyou gotta use pulls when dealing with direct relations like this, it's just much simpler#2022-09-0112:55jumarI have complicated recipe-pattern already: https://github.com/jacekschae/learn-pedestal-course-files/blob/main/increments/43-list-conversations/src/main/cheffy/recipes.clj#L9-L28
Not sure if that can easily be combined with what I need.#2022-09-0112:55pppaulyou don't need all this complicated stuff in the where, just query all public recipies, and pull related ones on the owner#2022-09-0112:56pppaulthat's not a complicated pull#2022-09-0112:56pppaulyou can add recipe/_owner in that pattern#2022-09-0112:57pppauli would recommend aliasing names too#2022-09-0112:57pppaul'(:recipe/_owner :as :owner) something like that#2022-09-0113:00pppaulthe pull has to operate on the account-id, though#2022-09-0113:00pppauli don't think you can do 2 pulls in a query#2022-09-0113:00jumarSorry, I simply don't understand how to do this filtering with pull syntax.
I need to get the union of two sets:
• All the public recipes - returning all the information as specified by my recipe-pattern
• All the private recipes owned by given account-id (if any)#2022-09-0113:02pppaulstep 2 = (d/pull db '[{:recipe/_owner [*]}] owner-id)#2022-09-0113:02pppaulstep 1 = [?e :recipe/public? true] where clause (only 1 clause)#2022-09-0113:03jumarThat's a separate action that I do not want to perform because the actor executing the query doesn't know anything about it.
In my case, the client would pass this:
{:query '[:find (pull ?e pattern)
:in $ pattern ?account-id
:where (or-join [?e]
[?e :recipe/public? true]
(and [?e :recipe/public? false]
[?owner :account/account-id ?account-id]
[?e :recipe/owner ?owner]))]
:args [db recipe-pattern account-id]})
and the executor would take it and pass to datomic.api/query - it has no clue about some additional pull that needs to be executed.#2022-09-0113:04pppaulhmmm#2022-09-0113:04pppaulcan you add a callback?#2022-09-0113:05jumarMaybe. But I feel like I'm making it more complicated just because of the desire to use pull 🙂#2022-09-0113:05pppaulcan you do this in 2 steps, where one is the query and another is just a pull (which can be done in a query as well)#2022-09-0113:06jumarMaybe I should think about allowing a sequence of queries, regardless whether this is the best approach here or not.
Thanks for the ideas.
I'll see what I can do.#2022-09-0113:06pppaulpull is really good, though. you can get away with almost never using queries if you use pull#2022-09-0113:07pppaulthe client can ask for all public recipes, and all owner recipes, you get 2 very tiny queries, 1 line each.#2022-09-0113:08pppauli have a feeling it'll be faster too, cus who knows what datomic is trying to do with that or-join stuff#2022-09-0113:10pppaulalso, you probably want to do something special with the all recipes query (filter, sort, limit) and adding that stuff to an already complicated query is probably going to cause issues#2022-09-0115:42jumarDoes Datomic have any support for java.time package?
It seems that I have to use an instance of java.util.Date as a value of instant attribute.
I didn't find much about this topic, just an older discussion: https://forum.datomic.com/t/java-time/1406#2022-09-0116:59magnarsUnfortunately Datomic doesn't let you extend the types it supports, but I have been using "Datomic Type Extensions" via https://github.com/magnars/datomic-type-extensions with this java.time-package https://github.com/magnars/java-time-dte for several years in different projects now. It is slightly leaky, but certainly usable.
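Since Datomic's :db.type/instant is backed by java.util.Date, the plain-JVM alternative to an extension library is converting at the read/write boundary. A minimal sketch (helper names are made up for illustration):

```clojure
;; Convert java.time values at the edges of the Datomic API.
(import '(java.time Instant)
        '(java.util Date))

;; write side: java.time.Instant -> java.util.Date
(defn instant->date ^Date [^Instant i]
  (Date/from i))

;; read side: java.util.Date -> java.time.Instant
(defn date->instant ^Instant [^Date d]
  (.toInstant d))

;; round-trips at millisecond precision (Date cannot hold nanos)
(date->instant (instant->date (Instant/now)))
```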
#2022-09-0215:32pppaulcould a db/function handle something like auto-coercion?#2022-09-0508:45thumbnailDoes http://my.datomic.com support maven-metadata.xml on package level? I’m getting a 404 at the expected url:
wget --http-user=<user> --http-password=<download-key>
#2022-09-0520:22Alex Miller (Clojure team)I don't think so#2022-09-0606:42thumbnailreason I'm asking, I'm trying to set up renovate / dependabot / lein ancient, which requires this file to determine whether the current version of the dependency is outdated.
But I suppose it's not supported 😁#2022-09-0618:08Ben GrabowI am considering the suitability of Datomic Cloud for my org, and I am faced with the challenge of managing data ownership of many entities and attributes. We have several teams, each of which has their own domain of system ownership, and it makes sense to me that our system would somehow enforce the ownership of attributes such that only certain teams or processes are allowed to write to those attributes. I think what I'm looking for is essentially an ACL (access control list).
Is this a good idea?
What approaches are available for implementing this?
Ideally I would implement this in the database itself, rather than in code that wraps the transact function, so that I could say definitively that the system behaves this way without a way to get around it.
#2022-09-0618:22JohnJunless you really want to write a lot of code, postgres gives you all of this for free#2022-09-0619:14Ben GrabowPostgres is off the table for now for other reasons. I'm most interested in how others have tackled this problem in Datomic Cloud (or in Datomic On-Prem if there's a significant difference).#2022-09-0620:21JohnJmaybe others know better but I can see only two options, use custom transaction functions (what you are asking for)(https://docs.datomic.com/cloud/transactions/transaction-functions.html#custom) which can affect performance heavily or write your own abstraction at the application layer.#2022-09-0620:23Ben GrabowI suppose a restatement of my problem in more general terms is: "How do you keep track of what part of your system is doing what, when you have a large system and a large organization managing that system, and each part of the system has unconstrained write access to a shared datomic instance?"#2022-09-0620:24Ben GrabowWith datomic on-prem I would have guessed the answer is "have a separate datomic instance for each domain of ownership. 
Constrain writes to a single instance at a time, and join reads across instances" but my understanding is this "join reads across instances" is not supported by Datomic Cloud.#2022-09-0620:24JohnJsounds like what you need is event sourcing#2022-09-0620:25Ben GrabowI should clarify that the problem is not just about tracking what happened in the past, but about establishing certainty around what the system will do in the future.#2022-09-0620:26Ben GrabowI don't think event sourcing really addresses either the historical or the forward-looking aspects of this problem.#2022-09-0620:28JohnJyou changed the original goal then came back to the original goal, don't know#2022-09-0620:33JohnJanyway, datomic is a dumb database, in the sense that the smarts are mostly done using something else or writing the code in the app layer#2022-09-0622:35jarrodctaylor@UANMXF34G Keeping track of vs enforcing can be a pretty large gap. For the former you can https://docs.datomic.com/cloud/transactions/transaction-processing.html#reified-transactions to capture arbitrary information. I have seen this used to store a variety of data about who performed a transaction (user|system|etc) as well as why a transaction was made. A Datomic cloud system can also support connections to multiple databases if that better addresses your problem.#2022-09-0905:55Kris CThis presentation might contain the info you need: https://youtu.be/7lm3K8zVOdY#2022-09-1618:46Ben Grabow@U0508JRJC Thanks for your response! It's news to me that Datomic Cloud can support connections to multiple DBs. Can you help me locate more information about that? The info I had found indicated that Datomic Cloud queries could only run against one database. https://forum.datomic.com/t/joining-across-databases-in-datomic-cloud/1632/2#2022-09-1618:52jarrodctaylorYou can for sure def connections to multiple DBs and query each as needed. 
You cannot join across multiple DBs.#2022-09-1619:03Ben GrabowHmm, yes that would be a significant downgrade in power for what I have in mind.#2022-09-0707:49plexushttps://twitter.com/plexus/status/1567418723668992000
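The reified-transactions approach jarrodctaylor mentions can be sketched as below. This only audits ownership after the fact rather than enforcing it; the `:audit/*` attribute names are hypothetical (they would need schema), while `"datomic.tx"` is the reserved tempid for the transaction entity itself.

```clojure
;; Sketch: annotate the transaction entity with provenance, so every
;; write records which team/service performed it and why.
(d/transact conn
  {:tx-data [{:db/id        "datomic.tx"   ; resolves to this tx's entity
              :audit/actor  "team-billing" ; hypothetical attribute
              :audit/reason "monthly invoice rollup"}
             ;; the actual domain assertions being audited
             {:invoice/id    "inv-42"
              :invoice/total 100M}]})
```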
#2022-09-0816:17Drew VerleeHow is a datomic transaction id different from an entity id?
I assume a transaction id is used as part of a transaction, and can be temp id or an existing id. But is there a different range for transaction ids and entity ids?#2022-09-0816:21Drew VerleeWell temp-ids can be strings.#2022-09-0816:24Drew Verleeah, the transaction id probably refers to tx in the datomic data model https://docs.datomic.com/cloud/whatis/data-model.html#2022-09-0816:26Drew Verleeoh yea, i remember now. The tx is necessary because we keep the entity history.#2022-09-0816:35favilaa datomic transaction id has a datom [tx :db/txInstant V tx true] , and the tx id is in a reserved partition.#2022-09-0816:35favilaotherwise it’s just an entity id, nothing special#2022-09-1407:48seepelAfter using Datomic for the last couple of months I had an idea to implement https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule in the database. I'm wondering how bad an idea is modeling https://en.wikipedia.org/wiki/Hash_consing lists with a schema like this?
[{:db/ident :pair/car
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :pair/cdr
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :pair/car+cdr
:db/valueType :db.type/tuple
:db/tupleAttrs [:pair/car :pair/cdr]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}]#2022-09-1409:51hanDerPederin the docs for datomic.api/entity it says an entity implements clojure.lang.Associative, but when I do (assoc my-entity :foo 1) I get an exception
1. Unhandled java.lang.AbstractMethodError
Receiver class datomic.query.EntityMap does not define or inherit
an implementation of the resolved method 'abstract
clojure.lang.Associative assoc(java.lang.Object, java.lang.Object)'
of interface clojure.lang.Associative.
also
(instance? clojure.lang.Associative my-entity) ;; => true
#2022-09-1409:56hanDerPederlooking at (datafy datomic.query.EntityMap) I see clojure.lang.Associative under :bases but no assoc under members. Am I not understanding what “implements” means?#2022-09-1410:01magnarsIf you are looking to assoc on Datomic entities, here's some code that lets you wrap a Datomic entity - keeping its semantics, while also accepting assoc. I'm not sure it's a good idea tho. https://gist.github.com/magnars/f2fb046904e7c1ea57dca7058266da1d#2022-09-1410:03hanDerPederneat! still curious why it says on the tin that entity implements clojure.lang.Associative while it seems it really does not.#2022-09-1410:26magnarsIn case you're wondering about the clojurescript part of the code, that's for using datascript in the browser. It's been in use for several years, so it's pretty battle tested code.#2022-09-1411:01favilaIt implements the containsKey and entryAt=>MapEntry methods, but not assoc#2022-09-1411:02favilaYou don't have to implement every method to “implement” (in Java instanceof sense) an interface#2022-09-1411:20hanDerPederTIL, thanks :thumbsup:#2022-09-1416:56JohnJdoesn't Java force you though? (excluding default methods)#2022-09-1416:56favilajava the language does, not the JVM/runtime#2022-09-1416:58favilaE.g. you can reify/gen-class/deftype/defrecord in clojure and only partially implement an interface, and clojure will happily generate the class bytecode and the jvm will accept it and just runtime error if you try to call a missing method#2022-09-1416:58favilathe java compiler OTOH will prevent you
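favila's point is easy to reproduce in a plain REPL without Datomic: reify can emit a class that claims an interface while omitting methods, and the JVM only objects at call time.

```clojure
;; A class that "implements" clojure.lang.Associative but only provides
;; containsKey. instance? still answers true; invoking the missing
;; assoc method throws AbstractMethodError at runtime, just like
;; datomic.query.EntityMap does.
(def half-assoc
  (reify clojure.lang.Associative
    (containsKey [_ k] (= k :a))))

(instance? clojure.lang.Associative half-assoc)    ;=> true
(.containsKey half-assoc :a)                       ;=> true
(try
  (.assoc ^clojure.lang.Associative half-assoc :b 1)
  (catch AbstractMethodError _ ::missing-method))  ;=> ::missing-method
```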
#2022-09-1417:02JohnJThough most of datomic was implemented in Java for performance reasons#2022-09-1417:03favilaeh, I don’t think that’s true#2022-09-1417:03favilaif you look in the jar, it sure looks like mostly AOTed clojure code#2022-09-1417:03favilajudging from class names etc#2022-09-1417:04favilasome key parts are java though, like fressian#2022-09-1417:05favilaAlso there are an awful lot of datomic namespaces#2022-09-1417:05favilaif you just introspect at runtime after requiring datomic.api#2022-09-1417:15JohnJah neat, would be interesting to know how many lines of clojure make up datomic#2022-09-1621:45pppaulyou can walk your entity and run (into {}) at each level where appropriate, or just run it at top level without the walk, then you get your normal map instead of a datomic ent (maybe sometimes you want this, but you also get this with a pull, which is probably better than walking an entity)#2022-09-1419:27hanDerPederis this transaction a no-op or will the history include this?
[[:db/retract 123 :foo/bar 1]
[:db/add 123 :foo/bar 1]]
I could check myself, but haven’t learned how to yet#2022-09-1419:45magnarsIt will fail with
Two datoms in the same transaction conflict
#2022-09-1419:50hanDerPederthat’s disappointing. thanks though 🙂#2022-09-1420:05ghadiWhy is that disappointing? It’s a non sensical transaction#2022-09-1421:03hanDerPederDisappointing in the sense that I need to apply myself.#2022-09-1421:47ghadiah 🙂#2022-09-1515:32pieterbreedI would like to add an ElastiCache cluster for use with an app running as an ion. It seems as if EC can only be used within a single VPC; Is there someone else that's successfully added an EC cluster to an ion? Is it OK to add the cluster to the VPC that the ion is running in, or must I resort to tricks such as VPC peering or something else?#2022-09-1602:11onetomaccording to https://docs.datomic.com/cloud/query/query-index-pull.html
> :limit - Maximum total number of results to return. Specify -1 for no limit. Defaults to 1000.
but in our experience index-pull does NOT limit the number of returned entities; it behaves as if :limit were -1.
com.datomic/ion {:mvn/version "1.0.59"}
com.datomic/client-cloud {:mvn/version "1.0.120"}
is this a bug in the documentation or a bug in the implementation?
#2022-09-1620:27kennyHuh, interesting. Fwiw, here's a repro:
(def c (d/client {:server-type :dev-local :system "a" :storage-dir :mem}))
(d/create-database c {:db-name "a"})
(def conn (d/connect c {:db-name "a"}))
(d/transact conn {:tx-data (map (fn [n]
{:db/doc (str "n" n)})
(range 2000))})
(def rs (d/index-pull (d/db conn) {:index :avet
:start [:db/doc]
:selector [:db/doc]}))
(count rs)
=> 2039
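Until the :limit behavior is clarified, one workaround consistent with the repro above is to cap consumption on the client side; index-pull results are seqable, so a transducing take stops realizing them early. Sketch only, reusing `rs` from the repro:

```clojure
;; Cap the result seq ourselves while :limit appears to be ignored.
;; `rs` is the index-pull result from the repro above (2039 entities).
(def first-1000
  (into [] (take 1000) rs))

(count first-1000) ;=> 1000
```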
#2022-09-1610:06LoicHi, I am developing a web-app using Figwheel for the cljs for hot reloading and a clojure backend with Aleph and datomic-free.
Adding datomic-free or dev-local to my deps.edn makes my deps.edn + figwheel-main REPL fail in vscode (calva) with the following error:
clojure -Sdeps '{:deps {nrepl/nrepl {:mvn/version,"1.0.0"},cider/cider-nrepl {:mvn/version,"0.28.5"},cider/piggieback {:mvn/version,"0.5.3"}}}' -M -m nrepl.cmdline --middleware "[cider.nrepl/cider-middleware cider.piggieback/wrap-cljs-repl]"
Execution error at nrepl.cmdline/require-and-resolve (cmdline.clj:221).
No namespace: cider.piggieback found
I think I saw someone having the same error but no fix was found
the deps.edn deps:
{:deps {;; front-end
com.bhauman/figwheel-main {:mvn/version "0.2.18"}
org.clojure/clojurescript {:mvn/version "1.11.60"}
reagent/reagent {:mvn/version "1.1.1"}
cljsjs/react {:mvn/version "18.2.0-0"}
cljsjs/react-dom {:mvn/version "18.2.0-0"}
markdown-to-hiccup/markdown-to-hiccup {:mvn/version "0.6.2"}
cljs-ajax/cljs-ajax {:mvn/version "0.8.4"}
;; both front-end and back-end
metosin/malli {:mvn/version "0.8.9"}
metosin/reitit {:mvn/version "0.5.18"}
metosin/muuntaja {:mvn/version "0.6.8"}
;; back-end
com.datomic/datomic-free {:mvn/version "0.9.5697"}
aleph/aleph {:mvn/version "0.5.0"}
mount/mount {:mvn/version "0.1.16"}
com.cognitect/transit-clj {:mvn/version "1.0.329"}}}
Any idea why?
Can it be related to the following warning still present in the last version of datomic-free:
WARNING: requiring-resolve already refers to: #'clojure.core/requiring-resolve in namespace: datomic.common, being replaced by: #'datomic.common/requiring-resolve
-----------#2022-09-1909:45pezVery strange. Is there a way you can share the project or a version of it that exposes the error? I can have a look and see if I can figure out a way to make it work.#2022-09-2000:48LoicSure, here is the branch I am working on: https://github.com/skydread1/flybot.sg/blob/42-simple-clojure-backend/deps.edn
dev alias for figwheel and dev-server alias for the backend.
The current workaround is to start the server on port 8123, start figwheel on port 9500, and use cross-origin headers for the requests to the server.
Ideally, the datomic-free (or dev-local) deps should be in the general deps and a handler passed to figwheel so only one server is running for repl dev. (same as for prod).
Thank you!
#2022-09-2006:38pezI've never seen this before. It doesn't make sense at all. I asked about it in #tools-deps: https://clojurians.slack.com/archives/C6QH853H8/p1663655723134009#2022-09-1921:07Dustin Getzhas anyone implemented the entity api for datomic clients under the presumption that it will run in an ion and be fast? or even datafy/nav#2022-09-1921:38seepelI'm working on an app that I plan to deploy to an Ion, and I do have the presumption that it will be fast. I'm curious why you ask, do you expect it to not be fast?#2022-09-2002:10Drew VerleeI think... The Entity api is given an id/key give me the value. Ions in aws run on top of aws dynamodb which is a key value store.
I assume the id lookup would be constant time.
But I'm probably missing more than half the story. In general, the faster you make it, the more space you have to use, and it also helps to know ahead of time what you were going to get asked.
Basically, fast answers are ones that are usually almost the question themselves.
(Tiny rant over)#2022-09-2002:19seepelAh, so the network round trip might be too slow to do something like the following?
(map #(db/pull db '[*] %) dbids)
#2022-09-2002:23Drew VerleeI'm not sure my understanding of the entity api is strong enough to answer dustin's question.
But get id, should be constant time. But the "entity api" might include a lot more.#2022-09-2010:59Dustin GetzYeah i think (d/entity db e) is not much more than (d/datoms db {:index :eavt, :components [e]}) wrapped in datafy/nav – the problem is if the client is remote then each nav is a round trip with no way to batch them, which the ions model solves by colocating the app process/classpath with the index#2022-09-2012:37Drew VerleeDustin, you said "ions model" but did you mean "on prem" ? I thought the ion client wasn't collocated with the index cache anymore. (It's been a while since I thought about the internals)#2022-09-2014:37Daniel JompheDynamoDB isn't pertinent to queries in Datomic Cloud.
It's only used by the transactor to guarantee serialization of the transactions (enforce ACID properties of transactions).#2022-09-2014:38Daniel JompheI might be wrong in some details...
Once a tx passes through DynamoDB, the transactor updates indexes in S3 and EFS, and notifies query groups of the availability of the tx's novelty in e.g. index incremental updates.
Query groups (to serve DB queries, and in which our Ions can be deployed) can then read the novelty in their EFS distributed filesystem, and copy that to the Valcache, a SSD-based cache that plays the role of e.g. Memcache for a single instance server of Datomic Cloud (thus, a hot cache based on this instance's query habits). This SSD-backed Valcache is an optional step for users who pay for a specific category of server instances in their query group.
When our Ions in a query group query the DB, the updated indexes required to serve the query are loaded from Valcache-SSD if available, or EFS otherwise, into the server RAM as the hottest and most performant cache an Ion can use. They will stay there in RAM until they are swapped out by more useful stuff later (based on future queries and size of RAM available).
In this context, Dustin's question seems to be:
Who, relying on an Ion's expected RAM-locality of hot caches, decided to implement either an
#2022-09-2014:50Dustin Getz"I thought the ion client wasn't collocated with the index cache anymore" I dont know anything about this, this is my first time using cloud#2022-09-2014:51Dustin GetzI am not using onprem – onprem has the real entity api so we'd just use that#2022-09-2014:52Daniel JompheThe Ion is our code added to a Datomic Cloud server instance.
It has direct access to the instance's Java Heap.
An Ion doesn't make requests through the network, except of course when the Datomic Cloud server instance has no local caches and will download them to satisfy a query. This is all transparent to the Ion.#2022-09-2020:12Daniel JompheHi @U1QJACBUM, pinging one of you cognitects, hoping you might bring some of your perspective to the OP's question! 🙏:skin-tone-3:
I'm very interested in using Hyperfiddle in an Ion context, and I know many of us are too.#2022-09-2022:26Dustin Getzhyperfiddle/photon will work (assuming websocket or other transport); my q is just about a demo datomic browser app that i want entity walking for#2022-09-2023:39Joe LaneWhen using hyperfiddle/photon with onprem do you access the server side entity api over a wire from the browser?#2022-09-2023:55Dustin Getzno#2022-09-2115:06Daniel Jomphe@U0CJ19XAM, photon does relay the browser's need for DB entity api navigation, but the DB calls are made strictly from the backend. So nothing special for you to consider about the browser. Just Clojure and Datomic Cloud.
So what is sought by Photon in Datomic Cloud Ions is "simply" some kind of entity api or datafy/nav in the context of an Ion's clojure process.#2022-09-2115:14Dustin GetzI think it's easy enough to build, and then Onprem and Ion users will get a fast db explorer app, Client (remote) users will get a slower app but it probably won't matter – and our abstraction is streaming, so slow information will just stream in late without slowing down the UX
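A hedged sketch of the approach Dustin describes: approximating entity navigation over the client API with a single d/datoms call per entity. The `entity-like` helper is hypothetical; it ignores cardinality-many merging, and resolving each attribute id to its ident adds extra calls, which is cheap colocated in an Ion but a round trip each over the wire.

```clojure
;; Illustration only: a flat map of an entity's datoms via the client
;; API, mirroring (d/datoms db {:index :eavt, :components [e]}) above.
(defn entity-like [db e]
  (reduce (fn [m d]
            ;; client datoms support keyword access to :e :a :v;
            ;; :a is an attribute eid that must be resolved to an ident
            (assoc m
                   (:db/ident (d/pull db [:db/ident] (:a d)))
                   (:v d)))
          {}
          (d/datoms db {:index :eavt :components [e]})))
```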
#2022-09-2115:15Dustin GetzWhen I said "browser" I meant in the REBL sense
#2022-09-2014:54Drew VerleeThanks for the clarification
#2022-09-2015:49uwoI just want to make sure my testing is correct here: You can't use db-stats together with d/as-of to get a picture of earlier datom counts, correct?#2022-09-2113:09souenzzoIt sounds like a bug for me.#2022-09-2117:33uwohmm! thanks, I'll go to the proper bug channels then#2022-09-2119:05favilaI don't think so. db-stats is the stats of the index, and there's only one index at a time#2022-09-2119:06favilaas-of, since, etc are just filters on the index#2022-09-2119:06favilaI don't think db-stats could be implemented efficiently if it filtered too#2022-09-2119:15souenzzoit is a great description for a index-stats function. 👀
The db in the past uses only a subset of the index, so db-stats should respect this subset, imho.#2022-09-2119:41favilabut to do this it has to descend into the "leaves" of the tree instead of just walking the index nodes#2022-09-2119:41favilayou can efficiently give stats for a particular index tier (e.g. all of history, or all of the "now"), but not for subsets of one, which is what as-of/since can do#2022-09-2220:50uwo@U09R86PA4 thanks. Out of curiosity, if you needed to compute the rate of growth of a set of attributes would you use the datoms api or trawl the tx-log? I've implemented my own db-stats using d/datoms that respects as-of+history, but it just takes forever, even though I'm just getting a count for every month the database has been in operation.#2022-09-2220:52uwoI'm guessing that I'm doing a lot of double counting since I'm running the computation for each month. Probably would be more efficient to trawl the tx-log, no?#2022-09-2220:55favilaDepends on exactly what you are measuring and why. If you are just looking at a few attr and are mostly interested in index size or churn or sth like that, I'd d/datoms aevt once over an unfiltered history db and bucket by tx myself. You can compute tx intervals for time using avet of db/txInstant#2022-09-2220:56favilaIf you want everything, or you want to include unindexed info (eg noHistory attr churn) I'd use tx range directly#2022-09-2220:59uwoOh, lordy -- bucket by tx -- I'm embarrassed for not thinking of that! Thanks @U09R86PA4!!!#2022-09-2221:08uwoHmm. I see -- if I'm using the :aevt approach it's gonna need to be only for a few attributes. Gonna tx-range since I need more of a global snapshot of what business entities (attr sets) with the largest datom footprints are growing fastest.#2022-09-2016:00uwothis would have been a nice technique for calculating rate of growth (esp. 
for certain sets of attributes).#2022-09-2114:56shane👋 hello - I have a question around how best to model a parent-child relationship where a field on the child is unique in the context of the parent.#2022-09-2114:57shaneFor example - I would like a workspace name to be unique within an organization but different organizations can each have a workspace called "My workspace".#2022-09-2114:57shane{:db/ident :organization/workspaces
:db/doc "The workspaces in the organization."
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/many
:db/isComponent true}
; workspace attributes
{:db/ident :workspace/organization+slug+topdown
:db/doc "The composite identifier of the workspace."
:db/unique :db.unique/identity
:db/valueType :db.type/tuple
;; :organization/_workspaces doesn't work as a composite tuple
;; since it seems to be more pull query rather than schema syntax
;; but it feels like a better data model because it is top down
;; and I can use isComponent on the parent and I think it makes
;; for easier pull queries
:db/tupleAttrs [:organization/_workspaces :workspace/slug]
:db/cardinality :db.cardinality/one}
{:db/ident :workspace/organization
:db/doc "The parent organization of the workspaces."
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :workspace/organization+slug+bottomup
:db/doc "The composite identifier of the workspace."
:db/unique :db.unique/identity
:db/valueType :db.type/tuple
;; this works but I need to add a ref going bottom up and either
;; lose the isComponent ref on the parent or have duplicate ref fields
;; on parent and child
:db/tupleAttrs [:workspace/organization :workspace/slug]
:db/cardinality :db.cardinality/one}#2022-09-2114:57shaneIt really feels like I should model this top-down so that I get to use isComponent and have nicer pull queries - but all the examples of composite tuples need to use refs that are on the entity.#2022-09-2114:58shaneI don't think this is a particularly unusual requirement - how should I solve this the datomic way? should I not be trying to solve this in the schema and instead use a pred fn like unique-within-organization that I write myself?#2022-09-2119:01favilaPart of the problem is that datomic does not enforce that isComponent attrs are actually unique. (i.e. that there's always one-and-only-one :organization/_workspaces per workspace)#2022-09-2119:01favilaI would say in general that you should strive to orient the direction of refs so that the cardinality is one or as small as possible#2022-09-2119:02favilaif you keep that principle, then the tuple approach is more natural#2022-09-2119:03favilahowever there are tradeoffs: the edges of a domain are less discoverable, you can't use isComponent + retractEntity to enforce lifetimes (although honestly you quickly outgrow how naive that is), and you can't use index-pull in lots of cases where you would want to.#2022-09-2119:04favilaanother approach is you can use :db/ensure and an entity predicate to enforce the invariant, but you have to know when the application should include that check.#2022-09-2119:05favilathe advantage of that is you can use the shape you want and you don't need to keep another index for a relationship that potentially very rarely changes#2022-09-2119:35shane:thinking_face: interesting - these points echo most of my concerns.
> orient the direction of refs so that the cardinality is one or as small as possible .. then the tuple approach is more natural
this is good to know - I was definitely thinking about this
> retractEntity ... you quickly outgrow how naive that is
I was worried about losing this - but maybe its not that big of deal then
> you can't use index-pull in lots of cases where you would want to
this is what I noticed but I realized I use pathom for the api with lots of batch resolvers and most of my pulls are really just pulling ids. So maybe this won't be an issue in practice.
> you can use :db/ensure and an entity predicate to enforce the invariant
yea I was headed down this path and I probably still need to do this but I was hoping to hand over things like "is unique" to the db.#2022-09-2119:37shaneOk - I think I just need to try something and see how it feels:
1. model bottom-up, so children have a single ref to the parent
2. now I can add my composite tuples to check for uniqueness
3. leverage pathom to hopefully make up for loss of easy pull queries.
4. look into custom transactions to replace isComponent + retractEntity
thanks for the feedback! first time using datomic for anything more than toy projects so this definitely helps!#2022-09-2119:55Dustin GetzIs there a faster way to stream attrs out of Datomic Cloud than (d/q {:query '[:find ?e :where [?e :db/valueType _]] :args [db]}), clever use of the datoms API for example? Fast in terms of time to first byte#2022-09-2120:06pyryThere definitely could be, as d/q is eager. For instance, d/qseq would typically be better for streaming results.
#2022-09-2120:09Dustin Getzah of course, thank you#2022-09-2120:13pyryCould also try using the AEVT index with the datoms API.#2022-09-2120:14Dustin Getzit would be a fullscan though unless i am missing a clever trick to skip ahead#2022-09-2120:16favilad/index-pull also
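For reference, the qseq variant pyry suggests is a one-word change to the original query; d/qseq returns results lazily, so the first tuples can stream before the full result set is realized. Sketch, assuming `db` is a current client db value:

```clojure
;; Same query as above, realized lazily with d/qseq instead of d/q.
(def attr-eids
  (d/qseq {:query '[:find ?e
                    :where [?e :db/valueType _]]
           :args  [db]}))

;; consume incrementally; nothing forces the whole result set
(take 5 attr-eids)
```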
#2022-09-2306:08jasonjcknLatest datomic release has ‘High’ CVEs for h2 database dependency, any solutions to this? it’s a pretty significant issue for deploying it at my company#2022-09-2306:49jasonjcknapparently i can just upgrade the h2 database dependency to latest version, so long as i’m using postgresql driver, didn’t think it’d be that easy reading through chat logs#2022-09-2310:48favilaNote that the difficulty is that h2 itself is not compatible with its own db files across the major releases. I suspect this is why datomic has not bumped it: suddenly no one would be able to open their existing dev dbs
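A hypothetical deps.edn fragment for the dependency bump jasonjckn describes (version numbers are illustrative, not prescribed by the thread). Per favila's warning, this is only safe when storage is external, e.g. postgresql, since h2 2.x cannot open 1.x dev-db files:

```clojure
;; Illustrative only: exclude Datomic's bundled h2 and pin a 2.x release.
{:deps {com.datomic/datomic-pro {:mvn/version "1.0.6397"
                                 :exclusions  [com.h2database/h2]}
        com.h2database/h2       {:mvn/version "2.1.214"}}}
```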
#2022-09-2317:20jasonjcknThere’s also some API compatibility/ class errors if you try to use h2database v2.x with a dev/mem instance of datomic too from what i saw, i’m still using v1.x on dev builds #2022-09-2316:50luposlipHey there!
I have a development Transactor (1.0.6397) running in a docker container with the datomic dev protocol. I’ve set the transactor and peer passwords, and set it to allow remote connections.
When the docker container starts, it creates the data folder with the h2 database.
I try to connect from my repl (same datomic version) with connection string ..?password=[the-password], but I just get this error, no matter what I try:
1. Caused by org.h2.jdbc.JdbcSQLException
Wrong user name or password [28000-171]
SessionRemote.java: 568 org.h2.engine.SessionRemote/done
...
JdbcConnection.java: 109 org.h2.jdbc.JdbcConnection/<init>
JdbcConnection.java: 93 org.h2.jdbc.JdbcConnection/<init>
Driver.java: 72 org.h2.Driver/connect
PooledConnection.java: 266
...
sql.clj: 16 datomic.sql/connect
Any help is appreciated!!#2022-09-2320:15Leaf GarlandIt sounds like you're doing things right. To double check, you have set two passwords in the transactor properties?
storage-admin-password=admin
storage-datomic-password=use-this-password
storage-access=remote
And you're using the datomic one in your connection string?
https://docs.datomic.com/on-prem/configuration/configuring-embedded-storage.html#2022-09-2320:34luposlipyeah, I’m doing it exactly the same. The passwords you wrote above, are they default passwords, or just random?
This is from my config file:
storage-admin-password=pwd
storage-datomic-password=pwd
storage-access=remote
#2022-09-2320:49Leaf GarlandJust random passwords. It looks like the connections are working, but you are exposing both the transactor and dev storage ports from your container? e.g. defaults are transactor on 4334 and H2 on 4335 (usually +1 from transactor port).#2022-09-2320:51luposlipAlright.
Yes, I’m starting the container like this:
docker run -p 4334-4336:4334-4336 transactor-dev:latest
#2022-09-2320:53luposlipLooking in the logs, it also seems to start fine, this is currently the last entry:
2022-09-23 20:52:41.642 INFO default datomic.lifecycle - {:tid 25, :username "asdfasdf", :port 4334, :rev 59, :host "0.0.0.0", :pid 17, :event :transactor/heartbeat, :version "1.0.6397", :timestamp 1663966361621, :encrypt-channel true}#2022-09-2321:07luposlipOHH!! It’s working now! Initially I set different passwords. Then I changed to pwd and pwd, and then changed other things (that presumably was wrong).
Then I tried to change the passwords back to being different - and now it works! 😄
So my conclusion is - the 2 admin/datomic passwords have to be different. This could be documented.
Thanks for your time @U02EP7NKPAL!
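For reference, a sketch of the pieces that have to line up for the dev protocol (all values here are illustrative, not from the thread): the two storage passwords in the transactor properties file, and the datomic one repeated in the peer's connection URI:

```clojure
;; transactor.properties (excerpt):
;;   protocol=dev
;;   host=0.0.0.0
;;   port=4334
;;   storage-access=remote
;;   storage-admin-password=admin-secret
;;   storage-datomic-password=peer-secret

;; Peer side: the password in the URI is storage-datomic-password,
;; not the admin one.
(comment
  (require '[datomic.api :as d])
  (d/connect "datomic:dev://localhost:4334/my-db?password=peer-secret"))
```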
#2022-09-2318:13Dustin GetzIs there ever a reason to prefer q over qseq? seems like qseq is strictly better (more powerful/expressive) in every way (zero loss in expressive power)#2022-09-2514:19pithylessdatomic.api, datomic.client.api, and datomic.client.api.async all seem to have slightly different versions of q and qseq (they all are documented to accept some subset of query-list, query-map, query-string).
One thing I am aware of - from a strict expressiveness view - is the query-map does not support returning a collection or scalar value in the find spec (i.e. :find [?a ...] or :find ?a .)
I do wonder if the internal implementations are different as to have different performance characteristics in greedy queries (e.g. when returning just a scalar or computing aggregates).#2022-10-1916:33onetomim also curious about the answer to the q vs qseq question.
and i also miss the scalar find spec a lot...
makes me wonder that im missing something...
the datalog query would be so nice and declarative, but it#2022-10-1916:36onetomthe lack of these scalar find specs are like a fly in the soup. they are just so useful, so often. especially during interactive repl work.
to remedy the situation, i was considering to write some https://github.com/thunknyc/richelieu advice around q & qseq, which would rewrite the datalog query and do the necessary post-processing on the result.#2022-10-1916:43onetomim already advising d/transact, d/with & d/pull to convert back and forth between java.util.Date & java.time.Instant, using [tick.core :as t]:
(defn maybe-instant->inst [maybe-convertable-to-inst]
(if (or (t/instant? maybe-convertable-to-inst)
(t/zoned-date-time? maybe-convertable-to-inst)
(t/offset-date-time? maybe-convertable-to-inst))
(t/inst maybe-convertable-to-inst)
maybe-convertable-to-inst))
(defadvice ^:private transact-instants
"Replace java.time.Instants with Clojure instants (which are java.util.Date)
before transacting."
[transact conn arg-map]
(-> arg-map
(update :tx-data (partial walk/postwalk maybe-instant->inst))
(->> (transact conn))))
(defonce _transact-instants (advise-var #'d/transact #'transact-instants))
(defonce _with-instants (advise-var #'d/with #'transact-instants))
(defadvice ^:private transact-throw-txd
"Like d/transact, but attaches the tx-arg to its exceptions."
[transact conn arg-map]
(try (transact conn arg-map)
(catch Exception ex
(-> "Transaction failed"
(ex-info arg-map ex)
throw))))
(comment
(advise-var #'d/transact #'transact-throw-txd)
)
(defn- ^:deprecated maybe-inst->instant [i] (if (inst? i) (t/instant i) i))
(defadvice ^:private pull-instants
([pull db arg-map]
(->> (pull db arg-map)
(walk/postwalk maybe-inst->instant)))
([pull db selector eid]
(->> (pull db selector eid)
(walk/postwalk maybe-inst->instant))))
(defonce _pull-instants (advise-var #'d/pull #'pull-instants))#2022-09-2318:19Ben SlessCopyright related question - if I implement a spec of the datomic query and pull API abstract syntax as appears in the official documentation, do I need to do anything regarding licensing, attribution, copyrights assignment, mentions, etc?#2022-09-2421:13hanDerPederI’ve modelled a linked list thing and, given a list element, want to find the head of the list.
Attributes in play:
• list/next ref to next elem in list
• list/head a string placed on the first elem by convention
I’ve made a recursive rule which walks the list backwards until it finds an entity with a list/head attribute
(def list-head-rule
'[[(list-head ?x ?head)
[?head :list/head]
[(= ?x ?head)]]
[(list-head ?x ?head)
[?y :list/next ?x]
[list-head ?y ?head]]])
And call the query as follows
(d/q '[:find [?head ...]
:in $ % ?elem
:where
[list-head ?elem ?head]]
(d/db conn)
list-head-rule
some-ent-in-list)
if some-ent-in-list is the head entity the query takes 2ms. if it’s the 100th it takes 66ms.
I find this surprisingly slow. If I use a recursive pull from the head I get the whole list in ~1ms. I thought walking references backwards had the same perf as forward. Am I doing something terrible in the code above?
(loop [obj (d/pull (d/db conn) '[* {[:list/_next :as :prev] ...}] some-ent-in-list)]
(if-let [prev (some-> obj :prev first)]
(recur prev)
(dissoc obj :list/next)))#2022-09-2421:48favilaThe first rule variant says [?head :list/head] but head is never bound when the rule is evaluated#2022-09-2421:49favilaSo that realizes all head entities#2022-09-2421:50favilaTry [?x :list/head][(identity ?x) ?head]#2022-09-2421:51favilaIt is so critical for perf to know what is bound so you can do clause ordering that it's nearly always unsafe to declare rules without that special “require bound variable” syntax#2022-09-2421:53favilaSo I recommend something like (list-head [?x] ?head) in your definitions#2022-09-2421:54hanDerPederreading up on that now. what query time would you say is to be expected here? roughly same magnitude as the pull or slower?#2022-09-2421:54favilaRoughly same#2022-09-2422:04hanDerPederhmm, using
'[[(list-head [?x] ?head)
[?x :list/head]
[(identity ?x) ?head]]
[(list-head [?x] ?head)
[?y :list/next ?x]
[list-head ?y ?head]]]
and query
(d/q '[:find ?head .
:in $ % ?elem
:where
[?head :list/head]
[list-head ?elem ?head]]
(d/db conn)
list-head-rule
item-99-in-list)
I’m still getting ~66ms#2022-09-2422:07hanDerPederschema definition:
[#:db{:ident :list/head,
:valueType :db.type/string,
:cardinality :db.cardinality/one,
:unique :db.unique/identity}
#:db{:ident :list/next,
:valueType :db.type/ref,
:cardinality :db.cardinality/one}]
#2022-09-2422:10favilaIs this db small enough to fit in memory?#2022-09-2422:10hanDerPederyes, toy example#2022-09-2422:11hanDerPederthough currently using dev storage transactor#2022-09-2422:11favilaDo you see a difference if you def the query body? Perhaps there's some constant overhead#2022-09-2422:14favilaOtherwise idk. There is a tiny difference in index use (pull uses eavt for the first and last lookup; query generally uses aevt if a is static) but that's not a difference I'd expect to matter here#2022-09-2422:35hanDerPederno difference if I def the query body. Here’s a complete example if you want to try: https://gist.github.com/handerpeder/d6db931cd8fceca629ea4b42102d6f05
either way, thanks for the help!#2022-09-2501:46favilaThe gist still has the problem of a too-early [?head :list/head] (line 48, that should just be removed), but that doesn’t matter at this data size.#2022-09-2501:46favilaAFAICT recursive rules are just really slow…#2022-09-2501:46favilaThis is an equivalent rule:#2022-09-2501:49favila(defn heads [db elem]
;; same thing as the list-head rule,
;; including no assumptions about number of matches
;; and fully evaluating each branch.
(-> #{}
(into (map :e) (d/datoms db :aevt :list/head elem))
(into (mapcat #(heads db (:e %)))
(d/datoms db :vaet elem :list/next))))
(def list-head
['[(list-head [?x] ?head)
[?x :list/head]
[(identity ?x) ?head]]
'[(list-head [?x] ?head)
[?y :list/next ?x]
(list-head ?y ?head)]
;; function call instead to avoid recursion
['(list-head2 [?x] ?head)
[(list `heads '$ '?x) '[?head ...]]]])#2022-09-2501:50favila(time
(d/q '[:find ?head
:in $ % ?value
:where
[?elem :list/value ?value]
(list-head ?elem ?head)]
(d/db conn)
list-head
99))
"Elapsed time: 40.233292 msecs"
=> #{[17592186045418]}
(time
(d/q '[:find ?head
:in $ % ?value
:where
[?elem :list/value ?value]
(list-head2 ?elem ?head)]
(d/db conn)
list-head
99))
"Elapsed time: 1.5925 msecs"
=> #{[17592186045418]}#2022-09-2501:51favilabut the rule is probably stack overflow resistant, my fn is not#2022-09-2521:10hanDerPederThanks!#2022-09-2518:20Dustin Getz(extend-protocol ccp/Datafiable
Datum
(datafy [^Datum [e a v tx op]] [e a v tx op]))
Has anyone done this work already, a contrib library for Datomic and Datomic Cloud to integrate obvious stuff?#2022-10-1917:03onetomi also have a similar utility for testing purposes:
;; FIXME this should be supported more directly by matcher combinators
(defn datum->map [datum]
(into {} (map #(vector % (% datum))) [:e :a :v :tx :added]))#2022-09-2518:38Dustin GetzI'm developing a web-based tool that lets Datomic users (both Cloud and Onprem) browse their database. Is there a standard or contributed set of specs to capture Datomic connection strings and configuration data that feels something like shadow-cljs.edn?#2022-10-1916:58onetomi'd be interested in such a tool too...
i've also started to develop some puget pretty-printing conveniences for datoms and transaction results, but it's quite clunky, because there are differences between the client configs, eg:
(defn basis-t
"Return the basis T of a db-val, regardless of its concrete implementation."
[db-val]
(or (-> db-val :basisT) ;; (d/client {:server-type :dev-local})
(-> db-val :t) ;; (d/client {:server-type :cloud}) or :ion
))#2022-10-1917:01onetomor
(defn database-id
"Return the database ID of a db-val, regardless of its concrete
implementation."
[db-val]
(let [db-uuid-or-name
(or
;; (d/client {:server-type :dev-local :storage-dir :mem})
(-> db-val :id)
;; (d/client {:server-type :cloud}) or :ion
(-> db-val :database-id))]
(try (medley/uuid db-uuid-or-name)
db-uuid-or-name
(catch Exception _ex
;; (d/client {:server-type :dev-local :storage-dir "/some/path"})
;; Such mode of operation doesn't seem to assign a UUID to databases,
;; so we hard-wire some, based on the name of possible databases.
(-> {"db1" "25f0edda-69ae-4f52-b1b0-3c9ce83ac84e"
"db2" "0f2aa132-c388-4ed7-ba92-3a2710e16965"}
(get db-uuid-or-name))))))
which database-id we are using to detect db reconstructions, because we store references to db values at a certain point in time in other datomic dbs...#2022-10-1917:04Dustin Getzhow far would this get you
(extend-protocol ccp/Datafiable
Datum
(datafy [^Datum [e a v tx op]] [e a v tx op]))
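A minimal, self-contained variant of the idea above, using a plain record to stand in for Datomic's Datum type (the record and field names are illustrative; a real extension would target the actual Datum class):

```clojure
(require '[clojure.core.protocols :as ccp]
         '[clojure.datafy :refer [datafy]])

;; Stand-in for datomic's Datum type.
(defrecord FakeDatum [e a v tx added])

;; Datafy a datom as a plain 5-tuple vector.
(extend-protocol ccp/Datafiable
  FakeDatum
  (datafy [{:keys [e a v tx added]}]
    [e a v tx added]))

(datafy (->FakeDatum 17 :user/name "alice" 1000 true))
;; => [17 :user/name "alice" 1000 true]
```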
#2022-10-1917:04Dustin Getzetc for various internal types and protocols#2022-10-2001:29onetomI still haven't grasped the essence of datafy & nav, so it hasn't occurred to me to utilize those, but probably quite far.
can u recommend any datafy & nav tutorials, besides https://corfield.org/blog/2018/12/03/datafy-nav/ ?#2022-10-2001:52Dustin Getzi posted a next journal notebook on reddit showing datafy on java io file#2022-10-2001:55Dustin Getzit's quite simple - it's like implementing pr-str on a reference type but instead of printing as a string you're printing as a value#2022-10-2001:59onetomthx, i will dig it up!
im mostly confused by how nav behaves:
(let [x (datafy {:a 1})]
[(nav x :a 1)
(nav x :a 2)
(nav x :b 1)
(nav x :b 3)
(nav x nil 1)
(nav x nil 2)])
#2022-10-2002:00onetomthese navs all return the same value i passed in, eg. [1 2 1 3 1 2]#2022-10-2002:02onetomthough i should probably do, if i understand nav correctly:
(let [x (datafy {:a 1})]
[(nav x :a (get x :a))
(nav x :a (get x :a))
(nav x :b (get x :b))
(nav x :b (get x :b))
(nav x nil (get x nil))
(nav x nil (get x nil))])#2022-10-2002:02onetomwhich yields [1 1 nil nil nil nil]#2022-10-2002:05onetomfound your article
https://nextjournal.com/dustingetz/datafynav-implementations-for-javaiofile#2022-10-2002:06onetomand the related reddit thread:
https://www.reddit.com/r/Clojure/comments/x1ust9/playing_with_datafynav_on_javaiofile_in_a/#2022-10-2002:19Dustin Getznav is unintuitive bc datafy already gave you the k and v, nav is an opportunity to undo the datafy and return the underlying reference. if there is no underlying reference then nav is just weird identity
#2022-10-2002:20Dustin Getzyou're meant to thus bounce between datafy and nav like in the last test in that notebook#2022-09-2714:19ennIs there a significant overhead to rule invocation (not sure if that’s the right word)? Would you expect a significant difference in performance between:
(d/q '[:find ?result
:in $ % [?arg ...]
:where (my-rule ?arg ?result)]
db
'[[(my-rule [?arg] ?result)
...]]
coll)
and
(d/q '[:find ?result
:in $ % ?args
:where (my-rule ?args ?result)]
db
'[[(my-rule [?args] ?result)
[(identity ?args) [?arg ...]]
...]]
coll)
I would expect these to be roughly equivalent but in practice it seems like, when coll has more than a few items, the second style where the collection is “destructured” within the rule body is significantly faster.#2022-10-0415:26ennFollowing up on this and also on the discussion of recursive rules https://clojurians.slack.com/archives/C03RZMDSH/p1664053990766329, I had been imagining that rule overhead was pretty trivial, and not worth worrying much about for the average application.
But testing with a tiny toy dataset, it seems that rule-invocation costs are actually pretty significant, and that it might be worth it to avoid rule composition altogether for performance-sensitive use cases.
I made a https://gist.github.com/enaeher/fab806cf0d919b094a76101093272713 with a tiny amount of data and wrote the same (trivial) query four ways:
1. inline, with no rules
2. with the matching logic extracted into a rule
3. with the matching logic extracted into two rules, one of which invokes the other
4. with the matching logic extracted into two rules, both of which are invoked directly in the :where clause of the query
The inline query is fastest, and the cost seems to increase with the number of rule invocations. #1 is twice as fast as #3 and four times as fast as #4.
Is there something I’m missing or doing wrong?
We’ve just deployed a system that resolves GraphQL queries into Datomic queries, and it makes heavy use of rule composition. It’s been a pleasure to build and work on, but these results have me wondering if I should replace the rules with functions.
#2022-10-0513:33KeithHello @U060QM7AA! The main thing I see causing a difference here is the fact that your rules do not keep the predicate expression close to the clause it’s intended to be used with. Notice how the variants where [(> ?value 90)] is adjacent to [?user :user/level ?level] seem to be the fastest.
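To illustrate the clause-ordering point with a hedged sketch (attribute names are illustrative, loosely following the gist): Datomic evaluates :where clauses in order, so a predicate placed right after the clause that binds its variable filters rows early, while the same predicate at the end forces every earlier join to carry the unfiltered bindings:

```clojure
(def early-filter-q
  '[:find ?user
    :where
    [?user :user/level ?level]
    [(> ?level 90)]            ; filters as soon as ?level is bound
    [?user :user/name ?name]])

(def late-filter-q
  '[:find ?user
    :where
    [?user :user/level ?level]
    [?user :user/name ?name]   ; joins over every ?level first
    [(> ?level 90)]])
```

Both queries return the same set; only the size of the intermediate relations differs.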
#2022-09-2815:47Mark PerrottaAny updates on this issue: https://forum.datomic.com/t/not-possible-to-pass-more-then-one-vector-variable-to-or-join/1591#2022-09-2815:47Mark PerrottaI’m using datomic cloud and it seems to be an issue there as well#2022-09-2902:33Leaf GarlandI'd like to know if there are any guarantees around the ordering of multiple values for a binding in a query. For example in this query there could be 1 or 2 values for ?value and I am assuming that the order of those values matches the order of the rules in the or clause - so that the value of :some/attr1 will always be returned when present, falling back to the value of :some/attr2 when not.
(d/q '[:find ?value .
:in $ ?attr
:where
[?attr-id :db/ident ?attr]
(or
[?attr-id :some/attr1 ?value]
[?attr-id :some/attr2 ?value])]
db :test/attr)#2022-09-2902:38favilaNo, they are run simultaneously, (at least in effect, if not always actually)#2022-09-2902:39Leaf GarlandThanks! My initial tests were working the way I hoped but eventually I found some attributes where the order was different.#2022-09-2902:46Leaf GarlandOut of curiosity are there any Datomic docs or datalog articles I can read to understand that?#2022-09-2902:49favilaDatalog operates on sets, so there are no order guarantees about anything
#2022-09-2902:50favilaIt’s so foundational an idea that I’m not sure it’s written down. But if it is I would expect it in the query tutorial or reference#2022-09-2903:19Leaf GarlandJust for future reference, I think this is what I was looking for.
(d/q '[:find ?value .
:in $ ?attr
:where
[?attr-id :db/ident ?attr]
(or
(and [?attr-id :some/attr1 ?value]
(not [?attr-id :some/attr2]))
[?attr-id :some/attr2 ?value])]
db :test/attr)#2022-09-2919:09jackmochHey there, does anyone know if there was any response or resolution to this https://forum.datomic.com/t/troubles-with-upsert-on-composite-tuples/1355? My team is running into issues when using composite tuples with a ref as an attribute:
(def example-schema
[{:db/ident :my-first-string-attribute
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :my-second-string-attribute
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}
{:db/ident :my-ref-attribute
:db/valueType :db.type/ref
:db/cardinality :db.cardinality/one}
{:db/ident :my-keyword-attribute
:db/valueType :db.type/keyword
:db/cardinality :db.cardinality/one}
{:db/ident :composite-tuple/my-string-attribute+my-ref-attribute
:db/valueType :db.type/tuple
:db/tupleAttrs [:my-first-string-attribute :my-ref-attribute]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :composite-tuple/my-string-attribute+my-keyword-attribute
:db/valueType :db.type/tuple
:db/tupleAttrs [:my-second-string-attribute :my-keyword-attribute]
:db/cardinality :db.cardinality/one
:db/unique :db.unique/identity}
{:db/ident :a-ref}])
(def example-docs
[{:my-first-string-attribute "A String"
:my-ref-attribute :a-ref}
{:my-second-string-attribute "Another String"
:my-keyword-attribute :a-keyword}])
(def db-uri "datomic:")
(d/create-database db-uri)
(def conn (d/connect db-uri))
(d/transact conn example-schema)
(d/transact conn example-docs)
(d/entity (d/db conn)
[:composite-tuple/my-string-attribute+my-ref-attribute ["A String" :a-ref]])
;=> nil
(d/entity (d/db conn)
[:composite-tuple/my-string-attribute+my-keyword-attribute ["Another String" :a-keyword]])
;=> #:db{:id 17592186045432}#2022-10-0111:06dazldjust in case you hadn’t tried it:
(d/entity (d/db conn)
[:composite-tuple/my-string-attribute+my-ref-attribute ["A String" (:db/id (d/entity (d/db conn) :a-ref))]])
=> #:db{:id 17592186045419}
when looking up the entity by a tuple id, it seems that datomic doesn’t try to turn values inside the tuple into an entity via lookup, even if we’ve described that position in the tuple as being a ref.
That seems like a relatively easy bug to fix to me.#2022-10-0111:08dazldI have no idea what the lookup code actually looks like, of course, but if the identity is a tuple, then walking the data provided as a value for that identity should be straightforward.#2022-10-0111:08dazldcc @U07FCNURX#2022-10-2318:03Casey@jackmoch
Did you find a solution for this? Hitting the same problem with composite tuples + ref attrs here.
Also it seems one can't update an entity by only asserting the individual ref attrs, one must also assert the composite attr which contradicts the documentation.#2022-10-2511:54jackmoch@U70QFSCG2 We ended up pushing the entity resolution as far down the stack as possible so the callers can pass around the tuple and it gets resolved just before the query, pull, etc is executed
It definitely didn’t feel like the cleanest resolution so we also got rid of one usage pattern completely and are mainly using them for enforcing uniqueness on composite attrs
I did hear back from the Datomic team and they’re aware and considering options but I’m not sure what that looks like in practice.
I also did see a mention of lookup refs in the newest datomic changelog but I haven’t tried the new version yet to see if it impacts any of this behavior #2022-10-2718:59onetomWe also wish for smarter lookup refs, to simplify composite key situations.
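A hypothetical helper in the spirit of that workaround (the function name and shape are mine, not from the thread): resolve ref-valued components (idents or lookup refs) to entity ids before building the composite lookup ref, since Datomic does not resolve the values nested inside the tuple for you:

```clojure
(defn resolve-tuple-components
  "entid is a fn from ident/lookup-ref to entity id, e.g.
   (partial d/entid db) on the peer API. Non-ref components
   pass through unchanged."
  [entid tuple]
  (mapv (fn [c]
          (if (or (keyword? c) (vector? c))
            (entid c)
            c))
        tuple))

;; usage sketch against the schema earlier in the thread:
;; (d/entity db [:composite-tuple/my-string-attribute+my-ref-attribute
;;               (resolve-tuple-components (partial d/entid db)
;;                                         ["A String" :a-ref])])
```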
I still have the feeling, that I might be doing something wrong and that's why I'm in need of such a feature, but not sure how to simplify our data model... 😕#2022-10-2719:04CaseyFwiw we're moving from composite keys, which would otherwise naturally arise in the data model, to synthetic keys (uuids) and manual checks in the pre-transact code to enforce the unique composite relationship. It's not as pretty or elegant but results in more straightforward and readable code.#2022-10-2719:39magnarsWe had to resort to {:my-entity/composite-id (pr-str [id-1 id-2])} to get upsert, instead of using tuples.#2022-10-2811:52onetomi also ended up writing some transactor functions to achieve upsert behaviour, when i was using composite keys. really feels like im doing something wrong...
we are dealing with 3rd-party data, which is partitioned by external users and tenants/orgs too, so most of our entities have composite key with
1. a ref attribute to the 3rd-party org entity
2. a 3rd-party ID, which is unique within the 3rd-party org, but not across orgs necessarily
i would think that this must be a pretty common scenario, so im a bit surprised how clunky it is to work with in the end.
would be pretty nice though, if the lookup refs would be more nestable...#2022-09-2919:10magnarsWe had the same issue with refs in tuples, and had to find some other solution for it. Would be interested to hear if there is any progress on this.
#2022-09-3012:43robert-stuttafordfyi Cognitect: https://channel9.msdn.com/posts/Rich-Hickey-The-Database-as-a-Value link on https://docs.datomic.com/on-prem/learning/videos.html is broken, it now redirects to https://learn.microsoft.com/en-gb/shows/
#2022-09-3012:44robert-stuttafordworking link https://www.youtube.com/watch?v=EKdV1IgAaFc#2022-09-3017:08flowthingSo I imagine dev-local doesn’t support fulltext? I get Unable to resolve symbol: fulltext in this context when I try to use it.#2022-10-0512:02ivanaHello! I need an ability to have a setting, should I use Datomic or not. And if yes, then I make all the queries & transactions with my connection & db, but if not I want to have nils on any query & transaction without altering and even connecting to any db. Of course I may wrap all my code used Datomic api into (when ...) but there are so many places of it. Maybe there is a way of setting empty or blank connection parameters? Nil or {} fhrows an exceptions, maybe there is a way?#2022-10-0512:12souenzzoWhere the nils are created?#2022-10-0512:14ivanaFor now - nowhere. It is just my wish & dream of it, and I asked if it possible#2022-10-0514:20pyryPerhaps redefine the relevant functions from Datomic API with something like (alter-var-root! #'datomic.api/q ...)#2022-10-0514:27pyryOf course, what's practical depends on the specifics of your problem. I imagine the above would work fairly well if there's one global setting to toggle, whereas it probably won't be as good if you have eg. a toggle per user.#2022-10-0514:27ivanaThanks, looks like hack but if we have not other way, maybe it is the one#2022-10-0514:37pyryWell, I suppose a hack is exactly what you need if you don't want to refactor your callsites. 😅#2022-10-0514:41ivanaYep, you are right and thanks for it, but I hoped that Datomic may have that feature on its own side, i.e. (d/transact nil [.......]) => nil etc 🙂#2022-10-0609:24Christian JohansenIt seems like I can look up attributes in an as-of database before they are created - is this expected behavior?
(d/create-database "datomic:")
(def conn (d/connect "datomic:"))
(d/transact conn [{:db/ident :my/attr
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
(def db (d/db conn))
(def then (java.util.Date.))
(d/transact conn [{:db/ident :my/attr-2
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one}])
;; 1. Look up in the old db
(d/entity db :my/attr-2) ;;=> nil
;; 2. Look up in an equivalent as-of db
(d/entity (d/as-of (d/db conn) then) :my/attr-2) ;;=> {:db/id 74}
Using d/touch on the entity from the second example shows no data, but it surprised me that I was able to look up the eid from the attribute id at all in a database where it shouldn’t exist.
#2022-10-0803:38favilaIdents are “a-temporal” when resolved via d/entid (implicitly or explicitly). They are placed in a special db index/cache of elements which are indexed without considering retraction. This is what allows you to rename an ident and have the old one still work. It also allows code to reference the new name at times before it was created without erroring out#2022-10-0803:40favilaIf you query using “normal” index lookup eg [?e :db/ident :some-value] you will get what you expect, but it will be slower and won’t give you the special behavior through renames#2022-10-0805:20Christian JohansenI see, thanks 👍#2022-10-0914:34Dustin GetzThis is a Datomic web explorer using the client API, tested here on Datomic Cloud. is it useful? if we released this on github, what would you want to use it for? would you contribute features to it?
#2022-10-0914:34Dustin GetzMore info – it is implemented in Photon, our network-transparent and reactive dialect of Clojure/Script (implemented as a macro like core.async). The explorer abstraction binds to server-side datafy/nav so you can attach it to any server side data structure, for example the https://gist.github.com/dustingetz/dd67a35d818e3a1bf6733147cf5cdea7#2022-10-0914:40Dustin GetzTo bring this to parity with the https://docs.datomic.com/on-prem/other-tools/console.html (onprem) or https://github.com/tatut/xtdb-inspector or https://docs.datomic.com/cloud/other-tools/REBL.html is basically a tutorial exercise at this point. With added benefit of being web-first and maximally extensible#2022-10-1114:37Daniel JompheWow! Perf is incredible, considering the scroller implies pagination that coordinates backend-frontend-DB nav on each scroll action, right?
No promises, but here are our team's first thoughts in response:
1. We would use it to explore the heck out of our DB schemas and data.
2. We would use it not only to show, but also sometimes to edit things to debug our app when we transacted the wrong stuff and need to correct it.
3. We would use it to see new data trickle into the DB while we develop new schemas and features (and this would help us do 2. above less often).
4. We might add a https://babashka.org/scittle/codemirror.html to run arbitrary queries https://github.com/bsless/datalog-parser.spec and explore the results in this rich UI.
5. We might hook 4. above up to a filesystem storage of past queries, so that we might reuse the same queries over different DB replicas representing different environments (dev, staging, prod).
6. We might contribute back the general improvements we code.
#2022-10-1120:11Dustin GetzIndeed the scroll is efficient, server streamed etc thanks for pointing that out
#2022-10-1214:05jjttjjSeems like some major spam happening at https://ask.datomic.com/#2022-10-1214:06Robert A. RandolphFixed, thank you for notifying us!
#2022-10-1214:06Joe Lane@audiolabs ^^
#2022-10-1315:26Pascale AudetHello ! We have a particular use case and have a mandate to stick with Datomic as much as possible. We are starting to plan a lot of programmability into the system, so that end users can dynamically define entity types, attributes, etc. In other words, add their own schemas. Our first intuition is not to let them create actual schema attributes, but to represent them as data in our domain, not in Datomic's schema domain (we thought of Thing→Data from Reddit or some RDF variant). Of course, by doing this, we will lose the power of Datalog on regular schemas. We believe that we cannot use the typical vertical table (entity id, attribute and value) to store the entity data. So we thought of using tuples (entity id, attributes and values) and zip the attributes/values of the tuples (in the same table or separated, review the diagrams). We will deal with limiting the length of the tuples.
If you have experience with this type of design and its trade-offs, we'd love to hear it too.#2022-10-1316:06robert-stuttafordall in one multi-tenant database?#2022-10-1316:07robert-stuttafordyou have a ceiling on idents, 32k items if i recall
#2022-10-1316:41dvingoI was working on an application a couple months ago that had a similar use-case of allowing users to define custom forms - collection of form types. We decided to not allow users to create schema programmatically. In the end I think we agreed the system design would have been a lot simpler to allow them to actually transact schema attributes though. So I would recommend thinking through a system design that allows that. Namespaced keywords make this sort of thing tractable
#2022-10-1317:25Dustin GetzDo the user schemas overlap or do they naturally shard per tenant? Idea: one database per user tenant#2022-10-1321:48steveb8nWe have done this at Nextdoc. I can’t share all the IP but can show you the base storage design which simplifies how you store attributes.
#2022-10-1321:49steveb8nthere’s another set of meta data that maps customer entities to these entities and a layer of CRUD fns to abstract it all. that’s too much to share but the underlying storage can point you in a direction that works#2022-10-1321:50steveb8nwe use Lacinia and it makes the abstraction of customer entities easier too#2022-10-1322:01favilaThe approach I’ve taken is to create attributes for each value type, allow users to create attributes as data entities referencing one of them, and model “assertions” as entities with an attribute ref plus value ref. E.g.
;; schema attributes to support user-attribute models
[{:db/ident :user-defined-attribute/name
  :db/cardinality :db.cardinality/one
  :db/valueType :db.type/string
  :db/unique :db.unique/value}
 {:db/ident :user-defined-attribute/valueType
  :db/cardinality :db.cardinality/one
  :db/valueType :db.type/ref}
 {:db/ident :user-data/value-string
  :db/cardinality :db.cardinality/one
  :db/valueType :db.type/string}
 {:db/ident :user-data/value-strings
  :db/cardinality :db.cardinality/many
  :db/valueType :db.type/string}
 ,,,
 ]
;; User attribute definition
[{:user-defined-attribute/name "my-attribute"
  :user-defined-attribute/valueType :user-data/value-string ,,,}]
;; User attribute "assertion"
[{:user-data/attribute [:user-defined-attribute/name "my-attribute"]
  :user-data/value-string "my-value"
  ,,,}]
(d/q '[:find ?data-e ?data-attr ?data-val
       :where
       [?data-e :user-data/attribute ?data-attr]
       [?data-attr :user-defined-attribute/valueType ?datomic-a]
       [?data-e ?datomic-a ?data-val]]
     db)
#2022-10-1322:03favilaIt’s also possible to write a pull expression + xform that will “lift” an entity which represents a “user data” into a single map entry#2022-10-1322:12favilae.g.
(defn lift-user-data [elem]
  {(-> elem :user-data/attribute :user-defined-attribute/name)
   (get elem (-> elem :user-data/attribute :user-defined-attribute/valueType :db/ident))})
(pull db [{(:entity/user-data-element :xform 'lift-user-data)
           [{:user-data/attribute [:user-defined-attribute/name
                                   {:user-defined-attribute/valueType [:db/ident]}]}
            :user-data/value-string
            :user-data/value-strings]}]
      e)#2022-10-1405:17tatutas you can use multiple databases in queries, wouldn’t the approach of having a “tenant custom db” for each tenant be good? having a metamodel on top of the actual model seems cumbersome#2022-10-1405:18tatutlike having 1 shared main database that has all the common things and then each tenant would have a separate custom db for their attrs#2022-10-1408:37octahedrionI don't think Datomic Cloud supports multiple databases in queries, but OnPrem might
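favila's "lift" idea above can be exercised as plain Clojure over a map shaped like his pull result, no Datomic required (attribute names are taken from his example):

```clojure
;; "Lift" a pulled user-data element into a single {name value} entry,
;; following the shapes in favila's example above.
(defn lift-user-data [elem]
  (let [attr       (:user-data/attribute elem)
        value-attr (-> attr :user-defined-attribute/valueType :db/ident)]
    {(:user-defined-attribute/name attr) (get elem value-attr)}))

(lift-user-data
 {:user-data/attribute {:user-defined-attribute/name "my-attribute"
                        :user-defined-attribute/valueType {:db/ident :user-data/value-string}}
  :user-data/value-string "my-value"})
;; => {"my-attribute" "my-value"}
```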
#2022-10-1409:17robert-stuttafordon-prem does yes#2022-10-1410:18favilausing multiple dbs isn’t that great operationally unless you have a fixed number and they are smallish
#2022-10-1410:22favilaConnecting is slow, so they need to be connected all the time practically speaking. If they all share a transactor, a need for indexing on one blocks indexing on the others. It’s hard to predict object cache utilization. Maybe it makes sense sometimes. However I think both schema and databases are meant to be provisioned and manipulated by devs not users (i.e. carefully and thoughtfully)
#2022-10-1410:33robert-stuttafordyes, all dbs in use need to fit their roots into peer memory#2022-10-1410:39tatutgood to know#2022-10-1410:45Pascale AudetHi @U0509NKGK (@U09K620SG and @U11SJ6Q0K too) , ideally, everything in one database, about 90% of the data will be shared with other tenants. But I may be wrong in my reasoning.
And thanks for the idents, I didn't know that, I'll add it in our note.#2022-10-1410:48Pascale Audet@U0CKDHF4L, we are on Datomic Cloud#2022-10-1410:50Pascale Audet@U051V5LLP, thanks for your experience!#2022-10-1410:50Pascale Audet@U0510KXTU what happens if someone needs a 15th field?#2022-10-1410:56Pascale Audet@U09R86PA4, can you tell us how big your table is at this point? How long does it take to search on a user defined attribute?#2022-10-1411:00favilaWe have a 17 billion datom db, but these user attributes are a fraction of that, tens of millions of entities at most, tens of thousands of tenants. You’ll have to be more specific about what you mean by “search”--the operations we do usually start from a user attribute (unique per tenant) or a thing which has an attribute on it. Both of these are fast enough that we don’t notice.
#2022-10-1411:02favilaas you can see it’s an extra join or two onto likely-to-be-loaded (low-cardinality, very shared) entities.#2022-10-1411:08Pascale AudetThanks for the details! You've also responded to the "search" question.#2022-10-1416:48Pascale AudetI can see that a vertical table would work for our use case. However, I think most of you are on Datomic On-Prem? Do you think it would be the same on Datomic Cloud?#2022-10-1421:18steveb8nWe just add more attributes as needed. You will always have some limits to # of columns or you can be DOS’d or DOW’d so having limits in this dimension is consistent with that
#2022-10-1421:27favilayeah, even in your datomic schema cardinality limits are a good idea (i.e. cardinality-many rarely really means “infinity”)#2022-10-1711:17Kris CHi, my company kindly gave me some time to work on an open-source project of my choosing, and since I am a big fan of Clojure and Datomic (on-prem), I chose to build a "different" Datomic console (currently in alpha state).
The stack I used is Luminus for the back-end and Vue.js for the front-end.
Here's a video that should give you a pretty good idea of where I am heading with it...
GitHub link: https://github.com/digiverse/datomic-qbuilder
Looking for feedback from Datomic users, would you find such a console useful?
#2022-10-1711:39tatutA similar thread and question from Dustin on this channel a while back#2022-10-1711:40tatutI think inspectors (especially web based ones) are useful for two things: dev time use for browsing through nested data and in production for ops people#2022-10-1711:42Kris C@U11SJ6Q0K yes, I've seen the one from Dustin, but this is a different take, imho...#2022-10-1711:51Kris C(more emphasis on query/results, explorer is just a small widget)#2022-10-1711:51tatutis it meant to be a standalone app you run or can you incorporate it as a library to an existing app#2022-10-1711:52Kris Ccurrently standalone...#2022-10-1803:22steveb8nnice job. looks really clean.
#2022-10-1803:23steveb8nlots of overlap these days in tools like this e.g. Photon, Data Rabbit, Metabase etc. could you add a section to the readme stating how you are 1/ same as those tools and 2/ different from those tools#2022-10-1803:24steveb8nI think this would help us decide if the direction matches our needs#2022-10-1803:25steveb8nalso, any plans to make it work with cloud?#2022-10-1807:15Kris C@U0510KXTU I guess you are right, I thought the differences would be apparent from watching the video, but I should really describe them in words... So far I have no plans for the cloud, but if I would see interest, I guess everything is possible... Thanks a lot for your feedback! 🙏#2022-10-1807:20Kris CAlso, this console is meant to be used as a tool for learning and using Datomic, for non-programmers...#2022-10-1809:38steveb8nGood to know. That's a good way to start the readme, who is the target audience#2022-10-1815:04jaret@U013100GJ14 This is extremely cool. @U09K620SG’s thing earlier is also EXTREMELY EXTREMELY cool. I think both of these efforts while different are super interesting. If either of you are up to meeting with me I would love to try to capture problems you encountered while making these tools and "wish lists" of what you needed while building these tools.
#2022-10-1817:12Dustin GetzNice work @U013100GJ14!
#2022-10-2509:48Kris CThe Datomic QBuilder console now supports sorting of results.
#2022-10-1714:52jackmochHey folks, is there a way to reference the :db/txInstant timestamp of a transaction entity from within the transaction? I.e. if I’m transacting an entity and wanted to mirror the :db/txInstant timestamp on my domain entity, is there a way to express that in a single transaction?#2022-10-1715:03favilaUsing public interfaces you can only explicitly assert a txInstant value for the tx and have a matching domain one.#2022-10-1715:04favilathe tx will fail if the txInstant value is less than one that already exists#2022-10-1715:04favilaIt may make sense for some domains to reference the tx from the entity instead#2022-10-1715:05favilae.g. [:db/add e :entity/last-updating-tx "datomic.tx"]#2022-10-1715:14jackmochGotcha gotcha, yeah my use-case is basically a user-override where a domain entity would get a ts attr that mirrors :db/txInstant by default but also allow them to pass a timestamp if they choose
Obviously I don’t want users to touch Datomic’s tx entity, but I was hoping I could point to the ts value instead of the entire tx entity, so the domain entity could store it as a timestamp instead of a ref. It should be pretty trivial to do within a query, though.
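A minimal sketch of favila's first option (explicitly asserting the tx instant so a domain attribute can mirror it). `:order/updated-at` is a hypothetical attribute, and per favila the transaction will fail if the instant is earlier than the latest `:db/txInstant` already in the db:

```clojure
;; Build tx-data asserting the same instant on the tx entity and the domain
;; entity, so the two timestamps always agree.
;; :order/updated-at is a hypothetical domain attribute.
(defn stamped-tx [e inst]
  [[:db/add "datomic.tx" :db/txInstant inst]
   [:db/add e :order/updated-at inst]])

;; usage sketch:
;; (d/transact conn {:tx-data (stamped-tx order-id (java.util.Date.))})
```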
Just wanted to make sure I wasn’t missing something in datomic’s docs/API. Thanks!#2022-10-1723:58steveb8nQ: Does NuBank also own the full Datomic business now? I ask because I think it would affect the long term commitment to Datomic Cloud. Curious to hear thoughts on this in the thread….#2022-10-1723:58steveb8nSince NuBank only uses on-prem, I wonder if there is a strong reason for them to maintain Cloud? This matters to me as a Cloud/Ions customer#2022-10-1723:59steveb8nWould be really good to have some assurances about long term viability of Cloud, ideally similar to commitment to Clojure itself#2022-10-1800:00steveb8nfwiw: not casting doubt here. just scratching a mental itch that is probably valuable for others as well#2022-10-1802:13jaretSteve, Cognitect is owned by Nubank and we are a Nubank company. That's all true. I will ask the team what the appropriate wording is here or if we have made an official announcement with the appropriate legal terms but I can 100% tell you that Cloud and On-prem are not going anywhere! In fact, we have some really exciting things coming in Cloud that you're going to love.
#2022-10-1802:14jaretI know it's been awhile since our last Cloud release and even since our last on-prem release, but I think that's a reflection of a larger team size working on more ambitious features. And I am super happy to say that publicly 🙂
#2022-10-1802:16steveb8nI really like hearing this! thanks @U1QJACBUM looking forward to the future releases#2022-10-1802:16jaretI'll circle back here if there is a more appropriate press release or legal verbiage that we announced that would provide you and the community additional assurances. I know if you're having this thought likely others are and we aren't communicating well enough. That's definitely on me and my team to do a better job of.
#2022-10-1802:16steveb8nbut would also appreciate a more official/vetted response to the original question too
#2022-10-1811:09BenjaminJo I'm implementing a :http-direct function using ring and there is an issue where ring.middleware.cookies returns a lazy seq (the "Set-Cookie" header value is a seq), but cognitect.http_endpoint.jetty expects a string - class clojure.lang.LazySeq cannot be cast to class java.lang.String
["cognitect.http_endpoint.jetty$respond_bbuf_STAR_",
"invokeStatic", "jetty.clj", 342]}],
do you have advice on how to fix? I can of course coerce the seq into a string before returning in my handler#2022-10-1811:10Benjamin(defn- set-cookies
"Add a Set-Cookie header to a response if there is a :cookies key."
[response encoder]
(if-let [cookies (:cookies response)]
(update-in response
[:headers "Set-Cookie"]
concat
(doall (write-cookies cookies encoder)))
response))#2022-10-1811:14BenjaminI wonder if the author of cookies.clj wanted to say "apply str" instead of "concat"#2022-10-2412:21pieterbreedThis is an issue that is ion specific; you'll find that library works well enough in all the other ring servers. I just looked up our middleware for cookies on ion:
Here is ours:
(defn wrap-http-direct [handler]
  (fn [req]
    (let [resp (handler req)]
      (if (contains? (:headers resp) "Set-Cookie")
        (update-in resp [:headers "Set-Cookie"]
                   (fn [c]
                     (cond
                       (string? c) c
                       (seq c) (str/join "," c)
                       :else nil)))
        resp))))
I think this library can give you the same thing (but packaged as a lib...)
;; [net.icbink.expand-headers.core :as ion-cookies]
;; from net.icbink/expand_headers {:git/url ""
;; :sha "b2b0364422d71b8f233148942618b7b16da38ecf"}#2022-10-2412:41Benjaminyea thanks I will copy this.#2022-10-1814:23Henrik SuzukiHi! Has anyone here any experience of not being able to connect to Datomic again after querying after too much data?#2022-10-1815:02jaretOn-prem? Cloud? What error do you get? Your transactor still running? Did the peer OOM?#2022-10-1816:34kipzHello. I'm trying to use with (as per https://docs.datomic.com/client-api/datomic.client.api.async.html#var-with) with :db/retractEntity, and the call never returns - just times out. Is this supposed to work? It does seem to work when I add attributes, so am just wondering if this is just not possible, or something else is going on here. Thanks in advance.#2022-10-1911:18kipzFWIW - it does work mostly now. We think we have a dodgy query node 😬 Sorry for the noise.#2022-10-1818:27Drew VerleeHow do i use an aws role that has all the permissions to ensure an on-prem datomic transactor? https://forum.datomic.com/t/how-to-pass-ensure-transactor-fn-an-aws-role/2142.#2022-10-1908:30IMhi,
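The coercion at the heart of the middleware above can be tested without ring or ions; a sketch (function name mine). Note that comma-joining Set-Cookie values is a pragmatic workaround for the jetty string requirement, not something the HTTP spec blesses:

```clojure
(require '[clojure.string :as str])

;; Coerce a Set-Cookie header value to the String the ion jetty layer expects:
;; pass strings through, comma-join seqs, drop anything else.
(defn coerce-set-cookie [c]
  (cond
    (string? c) c
    (seq c) (str/join "," c)
    :else nil))

(coerce-set-cookie ["a=1; Path=/" "b=2"])
;; => "a=1; Path=/,b=2"
```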
was wondering are there any cases/caveats where a relatively fast query would take excessively long time? In other words, are there any deadlock-ish/blocking behavior when doing d/q with on-prem in-process peer (using d/history too)?#2022-10-1908:56Ivar RefsdalIs this on-prem or cloud?
I (and others) have been hit by network issues, i.e. silently broken TCP connections. The default behaviour of e.g. the postgresql driver is to wait forever for a packet. For postgresql you may set the socketTimeout property on the connection string. That might help if that is what is hitting you.
The problem is well described here: https://github.com/brettwooldridge/HikariCP/wiki/Rapid-Recovery
(Datomic on prem uses a different connection pool though, but the socketTimeout setting is the same.)#2022-10-1908:57Ivar Refsdaloh I see you are using on-prem#2022-10-1908:58Ivar RefsdalI wrote about this issue here:
https://ask.datomic.com/index.php/631/blocking-event-cluster-without-timeout-failover-semantics#2022-10-1909:05Ivar RefsdalI also wonder if you are running in the "cloud"?
My company's services are running in Azure, and we have had plenty of these silently broken TCP connections. All of our code needs to have proper network timeouts, otherwise -well- requests will just block forever it seems. Waiting forever is the default of quite a few libraries (including datomic).
We did not have this problem on prem (though that doesn't mean it cannot happen there of course), or at least it hardly ever happened.#2022-10-1909:34IMYeah, I've seen that ask.datomic thread while looking around, but not sure how much that applies, considering dynamodb backend is used in my case, not even sure if it supports some timeout parameter.#2022-10-1908:31IMalso unrelated question, is there any documentation about what the metrics from Datomic Metrics Reporter (`datomic.process-monitor`) mean exactly and how to interpret them? Things like PodGetMsec or AcceptIndexMsec? I could only find about transactor metrics using registered callback#2022-10-1912:17roltI remember reading somewhere that undocumented metrics were considered internal
#2022-10-1909:48emil0rFor restoring backups to a SQL storage, how do you format the jdbc-url, given that the connection already has a ? in the string you send in as a db-uri#2022-10-1909:50emil0rAlso, is there any limitations in which authentication you need for datomic towards postgres? The installation I am using uses scram-sha-256 as the authn method. Is there need to downgrade to md5?#2022-10-1910:06emil0rOK. Figured it out. You need an md5 based authentication on the postgres instance, and you escape the ? and & characters in your JDBC url with \#2022-10-1917:53Dustin GetzClient API – is it true that the client API is the same across all Datomic product lines? (Onprem, Cloud, Ion, Dev local) the docs for onprem and cloud both link to the same place https://docs.datomic.com/client-api/index.html
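emil0r's escaping tip above, as a small helper (name mine): backslash-escape the ? and & inside the embedded JDBC URL before splicing it into the db-uri:

```clojure
(require '[clojure.string :as str])

;; Escape ? and & with a backslash, per emil0r's finding for backup/restore
;; db-uris that embed a JDBC URL.
(defn escape-jdbc-url [url]
  (str/replace url #"[?&]" #(str "\\" %)))

(escape-jdbc-url "jdbc:postgresql://host:5432/datomic?user=u&password=p")
```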
#2022-10-1918:06favilaThere are minor differences, e.g. you can’t create/delete dbs (or perform administer-system?) with peer-server (= onprem)
#2022-10-1918:07favilabut these are all the same fns from the same namespace#2022-10-1918:07favilaand all the usual stuff works the same#2022-10-1918:33Pascale Audetoh, that is cool#2022-10-1918:51JohnJIn cloud, is the transactor and peer one process?#2022-10-1918:54JohnJ@U09K620SG haven't try it, but I don't think devlocal supports custom transaction functions since that requires ions#2022-10-1918:57faviladev-local is a single process---seems like the transaction function would only need to be available to the classloader#2022-10-1919:13JohnJOk, is not clear from the docs how you would make them available to devlocal/transactions since the examples rely on the ions api https://docs.datomic.com/cloud/transactions/transaction-processing.html#cancel#2022-10-1919:14JohnJbuy maybe datomic.ion is available in devlocal, haven't check#2022-10-1919:14favilaIn on-prem, nothing is done to make them available--just make sure the var is in the classpath.#2022-10-1919:14favilaI suspect dev-local is the same, because it’s the same process model#2022-10-1919:15favilaions are different, because you have to deploy the txfn to the transacting process somehow.#2022-10-1919:17favilaThis question can be answered very easily in a repl with dev-local loaded. 
I don’t have a dev-local project handy though.#2022-10-1922:23JohnJlast time I checked there was no datomic.ion namespace per (ns-all)#2022-10-1922:23favilawhy do you need an ion namespace to write a transaction function?#2022-10-1922:24JohnJfor using stuff like cancel https://docs.datomic.com/cloud/transactions/transaction-processing.html#cancel#2022-10-1922:25favilaoh, you mean what environment is available to the txfn#2022-10-1922:25favilaI misunderstood what you meant by this:#2022-10-1922:25favila> but I don’t think devlocal supports custom transaction functions since that requires ions#2022-10-1922:25favilaIt’s easy enough to add a shim for cancel#2022-10-1922:26favilacancel is just (throw (ex-info ..))#2022-10-1922:26favilawith some standard keys in the ex-data map#2022-10-1922:30JohnJthx, will look into it, don't have the project near by right now#2022-10-2012:28emil0rHow does one specify a transactor when using a postgres backed storage for Datomic pro? I get an exception error saying that I am trying to connect to localhost on the transactor, but the transactor is running on another machine. Looking over the documentation I cannot find anything about that for SQL#2022-10-2012:46Kris Csee the sample configuration in file DATOMIC_ROOT/config/samples/sql-transactor-template.properties#2022-10-2012:50Kris CRelevant configuration:
protocol=sql
host=POSTGRES_SQL_HOST
port=POSTGRES_SQL_PORT
sql-url=jdbc:
sql-user=YOUR_DATABASE_USER
sql-password=YOUR_DATABASE_PASS
sql-driver-class=org.postgresql.Driver
#2022-10-2108:05emil0rSolved the problem. For future reference in the event that someone else runs across the same problem I’ll detail it here.
1. I had a postgres storage backed datomic database running on one server. Both the transactor and the postgres RDBMS was running on the same server.
2. I had to switch to a public IP later on in the process when I wanted to access it from another machine.
3. Doing so broke the transactor, but since it was by now running as a service, I didn’t see that until much later when I started looking at the logs
4. Trying to connect to datomic from my peer didn’t work
a. It helpfully threw an exception, telling me that it could not access the transactor on the host “localhost”
b. What the exception did not tell me was that it also tried the server that the RDBMS was running on as a transactor host as well. This sent me on a wild goose chase, trying to find the error on the peer side, when it instead could not connect to the transactor at all, including the correct transactor address
c. The documentation is very unclear on how you connect to the transactor when you have a SQL backed storage, and does not at all tell you that it uses the same address for transactor as storage by re-using the storage address as a transactor address.
i. I might be missing in the documentation how you could specify a transactor, but from what I could read on the connect function in the documentation for datomic.api, everything is baked in to the connect function
5. Finally looking at the logs of the transactor, once I had made sure I could connect to the storage, I saw that the transactor now was not connecting to the storage. Running everything manually showed any faults real quick, and allowed for easy debugging of the issue
Hope it helps someone 🙂#2022-10-2108:09emil0rThis is for on-prem#2022-10-2017:25JohnJWhat I'm missing here? trying to use a peer classpath function in a transaction:
(defn create-movie [db e title genre release-year]
  [[:db/add e :movie/title title]
   [:db/add e :movie/genre genre]
   [:db/add e :movie/release-year release-year]])
(d/transact conn [[bar/create-movie "foo" "The Goonies" "action/adventure" 1985]])
#object[datomic.promise$settable_future$reify__7837 0x1d654b5f {:status :failed, :val #error {
:cause "Cannot write
#2022-10-2017:37favilaquoting?#2022-10-2017:38favila(d/transact conn [['bar/create-movie "foo" "The Goonies" "action/adventure" 1985]]) ?#2022-10-2017:38favila(assuming bar is the full namespace not an alias)#2022-10-2017:44JohnJyes bar is the full ns and I'm running transact from bar too, quoting gives :cause "Could not locate bar__init.class, bar.clj or bar.cljc on classpath."#2022-10-2017:44JohnJFWIW the docs here don't use quoting https://docs.datomic.com/on-prem/reference/database-functions.html#using-transaction-functions#2022-10-2017:46favilaYou’re transmitting a symbol to the transactor, not a function object (regardless of what the docs look like)#2022-10-2017:46favilaso is this on-prem? does the transactor have bar.clj in its classpath?#2022-10-2017:47localshredMissing {:tx-data [[...]]}#2022-10-2017:49JohnJoh it needs to be available to the transactor classpath? the docs make it look like you can set them up with the peer only if you want, like you can set them in the transactor or peer#2022-10-2017:51JohnJ@U5RFD1733 the peer lib differs from clients in syntax
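favila's point about quoting can be checked without a transactor: the quoted form puts a plain, serializable symbol in the tx-data rather than a function object (`bar` is the hypothetical namespace from the thread):

```clojure
;; With the quote, the first element of the tx list is just a symbol; the
;; transactor resolves it against its own classpath at transaction time.
(def tx-data
  [['bar/create-movie "foo" "The Goonies" "action/adventure" 1985]])

(symbol? (ffirst tx-data))
;; => true
```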
#2022-10-2017:58JohnJhttps://docs.datomic.com/on-prem/reference/database-functions.html#classpath-functions#2022-10-2018:12favila> To add a classpath function for use by transactors, set the DATOMIC_EXT_CLASSPATH environment variable before launching the transactor, e.g. if you added your code in mylibs/mylib.jar:#2022-10-2018:15JohnJyeah but the whole thing makes it sound like you can set them in the peer only too, maybe I'm misreading A classpath function is an ordinary Clojure function added to the classpath of a Datomic peer or transactor. To add a classpath function for use by peers, use your ordinary classpath-building tools, e.g. tools.deps, leiningen, or maven.#2022-10-2018:23favilaIn the peer it’s just normal code. You need it in the peer to do e.g. d/with on a tx that uses that fn#2022-10-2018:24favilathe contrast here is between fns installed into the db, which are addressed by keywords/idents, and fns addressed by symbol, which you just make sure are in the environment of whatever will execute them.#2022-10-2018:27JohnJthx, do you know if the env var only receives jars and does it have to be compiled?#2022-10-2018:27favilaIt’s just adding it into the classpath, so it can be anything java/clojure can accept#2022-10-2018:28favilae.g. you don’t have to AOT anything#2022-10-2018:28favilathis is literally just (require 'the-txfn-symbol-I-saw) at the end of the day#2022-10-2018:31JohnJok, was wondering if it has to be a jar or just a dir can do since it's not clear how DATOMIC_EXT_CLASSPATH extends the CP#2022-10-2018:31favilait’s all in bash--it’s just an append#2022-10-2018:31favilaor a prepend, don’t remember which#2022-10-2018:32JohnJok, just normal java classpath stuff thx#2022-10-2018:59JohnJis 'Database functions' the old method in case you needed to use java?#2022-10-2019:00favila‘database functions’ is older, but the difference is just where the code is stored.
Is it stored-procedure-like, or code-like?#2022-10-2019:07JohnJI see, so classpath functions are not really stored procedures#2022-10-2019:07favilaright they are just accessible in the environment, which you can change. They’re not versioned with schema.
#2022-10-2019:08favilano transaction or other data change makes them available#2022-10-2018:25Dustin GetzWhat is the fastest way to do a substring check in a :where clause? essentially clojure.string/includes? or re-matches#2022-10-2018:26favila[(.contains ^String ?s "substr")]#2022-10-2018:26Dustin Getzand this is a fullscan of all datoms under consideration?#2022-10-2018:26favila?s must be bound already#2022-10-2018:26favilapredicates like this can only reduce the result set#2022-10-2018:28Dustin GetzHow bad is this naive query then, are you saying it's not so bad?
(d/q '[:find [?e ...]
       :in $ ?needle
       :where
       [?e :order/email ?email]
       [(user.util/includes-str? ?email ?needle)]]
     db (or ?email ""))
(defn includes-str? [v needle]
  (clojure.string/includes? (clojure.string/lower-case (str v))
                            (clojure.string/lower-case (str needle))))#2022-10-2018:31favilaIt’s scanning every order-email assertion, yes, but that’s from [?e :order/email ?email] not the predicate#2022-10-2018:33Dustin GetzIn this case would it be generally encouraged to slug the string to lowercase at transaction time to avoid the computation in the loop, or is stuff like this generally considered idiomatic then#2022-10-2018:33favilabtw if you want a case-insensitive match I recommend using something which doesn’t force new string allocations#2022-10-2018:34Dustin Getzre-matches ?#2022-10-2018:34Dustin Getzi see so the idea is to reduce memory pressure moreso than optimize the speed#2022-10-2018:34favilae.g. org.apache.commons.lang3.StringUtils.containsIgnoreCase() which uses String.regionMatches under the hood
#2022-10-2018:34favilayes#2022-10-2018:35Dustin Getzok, thanks#2022-10-2018:35favilaI mean, it is faster too#2022-10-2018:43favilaAlso if this is really all you are doing, and the number of emails is very large, it may be better to use d/datoms + filter directly because that will use much less memory (maybe passing all or part of the result as input to another query). Queries need to realize their result sets and can’t be computed lazily or incrementally.
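The d/datoms + filter approach mentioned above could be sketched like this (on-prem peer API; `:order/email` is from the thread, the helper names are hypothetical):

```clojure
(require '[datomic.api :as d])

;; Allocation-free, case-insensitive substring test built on
;; String.regionMatches, per the suggestion above.
(defn contains-ignore-case? [^String s ^String sub]
  (let [n   (.length sub)
        lim (- (.length s) n)]
    (loop [i 0]
      (cond
        (> i lim) false
        (.regionMatches s true i sub 0 n) true
        :else (recur (inc i))))))

;; Lazily scan :order/email datoms via the AEVT index and keep matching
;; entity ids, without realizing a full query result set in memory.
(defn order-eids-matching [db needle]
  (->> (d/datoms db :aevt :order/email)
       (filter #(contains-ignore-case? (str (:v %)) needle))
       (map :e)))
```

Because the sequence is lazy, a caller can stop early or feed only part of the result into a follow-up query.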
#2022-10-2018:53thumbnailI want to find the first t-value where one of a set of attributes is asserted (any of them is fine). Right now i query for any of the attributes, convert the tx->t and find the lowest number. But this scales pretty badly as the database increases.
Afaik there's no index i can use, so i'm considering other options. (Datomic client btw)#2022-10-2018:54Dustin Getztx is ordered as well isn't it?#2022-10-2018:56Dustin Getzhow about something like
(d/datoms (d/history db) {:index :aevt :components ...})
#2022-10-2018:57Dustin Getzfirst datom for attr in :aevt index might be what you want assuming the e is all in the same partition and thus the ids increase (not sure how this works in cloud)#2022-10-2018:58Dustin Getzthe history db includes retractions, perhaps you want a regular db#2022-10-2019:04thumbnailfor context, right now i have this query:
(d/q {:query '{:find [(min ?t)]
:in [$ [?attr ...]]
:where [[?e ?attr _ ?tx]
[(datomic.api/tx->t ?tx) ?t]]}
:args [db relevant-keys]})#2022-10-2019:04thumbnailI’ll give d/datoms a try 🙂#2022-10-2019:42Dustin Getzon second thought, the history db may not be indexed, so hopefully the vanilla :aevt index has what you want and can answer this query efficiently – please report back what you find#2022-10-2019:44thumbnail(time (->> relevant-keys
(map #(first (d/datoms db {:index :aevt :components [%]})))
(reduce (fn [tt [e a v tx]]
(min tt (datomic.api/tx->t tx)))
Long/MAX_VALUE)))
This is working about 200x faster (30s to 150ms)#2022-10-2019:44Dustin Getzdoes it return correct answers? Lol#2022-10-2019:45thumbnailHaha yeah it returned the right answer 😅
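For reference, a client-API flavor of the same scan might look like this sketch (it inherits the thread's assumption that the lowest entity id for an attribute also carries the earliest tx; converting the final tx to t is left to the caller):

```clojure
(require '[datomic.client.api :as d])

;; For each attribute, take the first datom in the AEVT index and keep
;; the smallest :tx seen; min over raw tx ids orders the same way as
;; min over t values, so any conversion can happen once at the end.
(defn earliest-tx [db relevant-keys]
  (->> relevant-keys
       (keep #(first (d/datoms db {:index :aevt :components [%]})))
       (map :tx)
       (apply min Long/MAX_VALUE)))
```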
#2022-10-2019:45Dustin Getzi don't think you need tx->t either#2022-10-2019:46Dustin Getz(or rather you can call it once at the end if you need the basis in that form)#2022-10-2019:47thumbnailit’s only called once for every attribute (and it’s just a pure function)#2022-10-2019:48thumbnailbut i’ll clean this up at some point for sure 🙂.#2022-10-2101:37onetom@UHJH8MG6S u said u r using datomic client, right?
i can't find tx->t in that:
Unable to resolve var: datomic.api/tx->t in this context
Unable to resolve var: datomic.client.api/tx->t in this context
#2022-10-2101:38onetomalso, the tx values contain some partition number, which might not be monotonic as time goes on, so i think we can't do min on the :tx dimension of datoms.
(i haven't checked this claim personally, just heard it from my colleague)#2022-10-2106:40thumbnail:thinking_face: think i have datomic-pro on the classpath, but using client. I need t because i have to feed it into tx-range. This code is used in a synchronization mechanism to elasticsearch (datomic's fulltext isn't GDPR compliant)
I could run tx->t inside a query to keep it “pure”-datomic client#2022-10-2115:55jarethttps://forum.datomic.com/t/datomic-1-0-6527-now-available/2143
#2022-10-2116:25favilaWow, impressive changelog
#2022-10-2118:25stuarthalloway@U09R86PA4 I hope you are going to like io-stats in particular!#2022-10-2206:22robert-stuttafordio-stats is FAANTAAASSTIIIIIIIIC @U072WS7PE @U1QJACBUM 👏
#2022-10-2206:23robert-stuttafordit's a total gamechanger. we're in the process of integrating honeycomb and had plans to try to write some of these basics ourselves. now we can just stick all the io-stats into each trace!#2022-10-2212:39favilaSame! I was planning to instrument a db object next week and now I’m not sure about those plans. Unfortunately we still have a ton of entity-map code. I was hoping for a way to establish and return from an io-context independently of an eager querying function but I don’t notice that ability in the docs#2022-10-2213:05robert-stuttafordyeah the magic of entity bites again 😭#2022-10-2317:57prncIons question:
In the top level of a namespace I’m trying to
(set! *default-data-reader-fn* <my-fn>)
and seeing…
"Type": "java.lang.IllegalStateException",
"Message": "Can't set!: *default-data-reader-fn* from non-binding thread"
Probably something I don’t understand about dynamic vars, root vs thread bindings or about how datomic ions run our application — would appreciate any pointers or advice, thanks!#2022-10-2413:39hadilsGood morning! I have the following code:
(defn db-fixture
[f]
(dl/divert-system {:system "yardwerkz-dev" :storage-dir :mem})
(f)
(dl/release-db {:system "yardwerkz-dev"
:db-name datomic/database-name
:storage-dir :mem}))
(use-fixtures :each db-fixture)
It seems that in my unit tests, the entities that I put into the database (which should be diverted) are persisted to the cloud. When I run my tests two times in a row, I get a uniqueness constraint violation. I am sure I have made a mistake. Can anyone help me? Thanks.#2022-10-2415:00pppauli don't use cloud, but on prem the db uri has to look something like "datomic:mem://memorydb" for in memory db to work#2022-10-2719:00onetom@UGNMGFJG3 why don't u use dev-local for testing purposes?
https://docs.datomic.com/cloud/dev-local.html#2022-10-2414:43IMHi all,
Are queries (`d/q`) on d/history db optimized in some way? Or do they cause seeking through all assertions/retractions?#2022-10-2414:58pppaulpretty sure there is order on tx id and tx date#2022-10-2415:00robert-stuttafordit still uses the same indexes in their sort orders eavt aevt avet vaet - but they contain more datoms, because all assertions and all retractions are present instead of only the latest assertions
#2022-10-2415:01robert-stuttafordgiven that t is in this ordering, that means oldest stuff is fastest to work with cos it's at the top of the index when you look#2022-10-2415:02robert-stuttafordthis will only feel slow if you're working with large numbers of entities, but if e.g. looking at the history of a single entity, it shouldn't feel substantially slower
#2022-10-2415:02robert-stuttaford(sorry end of day brain, not speaking too clearly 😅 )#2022-10-2415:06IMRight, so considering eavt index, something like this should be pretty fast at all times? (as e and a are given)
...
:in $ ?eid ?e-attr ?asserted-value
:where
[?eid ?e-attr ?asserted-value ?tx true]
...
#2022-10-2415:07pppaulthat's something you need to profile. fast is subjective#2022-10-2415:08pppaulthere are a lot of scenarios where a full index scan is considered fast enough#2022-10-2415:11pppaulthat query is going to cut the DB down a lot, but if your DB is being abused and you have 1 billion records that satisfy it, then you could end up in pain town too.#2022-10-2415:13pppauli think there may be other ways to find all the transactions for an eid, though.#2022-10-2415:16IMIt's not looking for all transactions for an entity, it's looking for a transaction on a specified entity and attribute with specific assertion value.#2022-10-2415:17IMConsidering messages above, it sounds like it should be pretty efficient#2022-10-2415:23pppaulit should be pretty fast, but you should also be profiling as well. cus a slow query is ok in some places, and a fast query isn't fast enough in others. you can also seed your DB with a ton of data and try your query on it, that'll sort of give you an estimate of how it'll perform in the scenarios you test. i do this when trying to figure out how long my brute force solutions will work for.
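Filled out, the query fragment above might read as follows (the entity, attribute, and value here are hypothetical placeholders):

```clojure
;; Find the transactions in which a specific value was asserted for a
;; given entity/attribute, against a history db (whose datoms carry the
;; true/false `added` flag in the fifth position).
(d/q '[:find ?tx
       :in $ ?eid ?e-attr ?asserted-value
       :where
       [?eid ?e-attr ?asserted-value ?tx true]]
     (d/history db)
     order-eid :order/status :order.status/shipped)
```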
#2022-10-2419:47robert-stuttafordif you have e a and v and you want all the t then history will be fast enough imho 🙂#2022-10-2512:03frankitoxHi! Is there any caching going on in peers? Is there any way to force that caching? I'm having some problems with new deployments. When I deploy I initialize around 15 new machines that connect to the database and the first responses from the webserver take really long#2022-10-2512:03frankitoxCharting these responses correlates with a 'Read usage' metric in DynamoDB, so I'm thinking the problem might be that each new server asks data from the transactor which then pulls from Dynamo.#2022-10-2513:05Joe Lane@U3UFFB420 check out https://docs.datomic.com/on-prem/overview/caching.html
Your options are memcached or valcache.#2022-10-2517:39ennnit: the peer server is not reading from the transactor, it's reading directly from Dynamo.
In addition to adding another caching layer like memcached or valcache, if you know the data that the peer is going to need to serve web requests, you can try to force it to be loaded into the object cache by reading that data in your app before it starts handling traffic.#2022-10-2619:31frankitox@U060QM7AA so peers may read straight from Dynamo? Yes, that seems like the more sensible idea. Something like preloading the database may help.#2022-10-2619:43ennThat’s my understanding. Writes are centralized through the transactor but each peer can read from storage.#2022-10-2619:51frankitoxThank you!#2022-10-2518:27jdkealywhat would happen if someone manually edited a record in dynamodb table ?#2022-10-2518:36ghadithat's a bad kitty#2022-10-2518:46jdkealywould that require an entire database restore ?#2022-10-2518:48ghadipotentially#2022-10-2518:51Joe Lanedid this actually happen? if so, contact support.#2022-10-2619:30jdkealyno. i just saw someone clicking around and inspecting records in AWS dynamo console, and i noticed the update button, and it dawned on me…. “this would be really easy to make my day horrible”#2022-10-2719:01onetomthat's a good point... i think we are also overly liberal with our AWS access permissions...#2022-10-2520:49Dustin GetzFor the entity API, is the perf cost of not maximizing reuse of the entity ref an issue? By which I mean only locally saving the ref returned by (d/entity db e), but in larger scopes passing around scalar ids and re-establishing the entity ref?#2022-10-2520:51Dustin GetzI understand that the entity essentially memoizes its access to (d/datoms :eavt e) but at what scale does this start to matter especially with SSD cache 1ms away?#2022-10-2520:57favilaIt really only pays off if you repeat reads to the same attributes, because that guarantees you will never go even to the object cache. 
If you’re always reading new attributes there’s no difference.#2022-10-2520:58favilad/touch forces all attributes to be read if you want to do that eagerly#2022-10-2520:58Dustin Getzdo you know what the cost difference is between object cache and SSD?#2022-10-2521:00favilaIt’s the difference between pure B+tree pointer chasing (assuming every pointer is loaded) vs io scheduling and decoding fressian into objects on miss
#2022-10-2521:00Dustin GetzAnd is it true that for "large" dbs that the object cache might be entirely evicted and rebuilt on a request to request basis due to the object cache being significantly smaller than the actual indexes?#2022-10-2521:01favilaif your entire workload does not fit in OC, something will get evicted#2022-10-2521:01favilaalso new indexes create new segments, and are an inherent source of eviction
#2022-10-2521:01Dustin Getzis that common for the workload to not fit in OC in a prod configuration (so permitting eng to partition the query load across multiple query boxes if that is a thing people do to make datomic fast)#2022-10-2521:02favilaYou’re asking if people use big or small dbs with datomic. 🤷
#2022-10-2521:03favilaI know that for us, we have 17+ billion datom db, and routing request loads for locality became essential for performance at < 4 billion datoms#2022-10-2521:04Dustin Getzand is it even possible to make it so OC is mostly not thrashing every request#2022-10-2521:04favilabut we have a lot of dumb select *-like workloads which probably thrash the cache horribly anyway, and we have new indexes every 10-15 minutes
#2022-10-2521:05favilayes it is, certainly, by controlling your read workload and locality
#2022-10-2521:05Dustin Getzthanks#2022-10-2521:05favilawatch your object-cache hitrate
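A small sketch of the memoization behavior discussed above (peer API; `order-eid` and the attribute are hypothetical):

```clojure
;; Repeated reads through one entity ref are memoized per attribute, so
;; only the first read of each attribute can miss the object cache and
;; reach storage. Passing around scalar ids and re-establishing the ref
;; discards that per-entity cache.
(let [e (d/entity db order-eid)]
  (:order/email e)   ; may fetch a segment
  (:order/email e)   ; served from the entity's own cache
  (d/touch e))       ; or eagerly realize every attribute at once
```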
#2022-10-2614:27pppaulis it possible to add :db/unique :db.unique/identity to a schema that's already in the DB? when i do i get errors related to index#2022-10-2614:29favilait is possible. https://docs.datomic.com/on-prem/schema/schema-change.html#2022-10-2614:29favilaif you are on on-prem and the attribute does not have a value index yet, you need to add one first so it can verify uniqness#2022-10-2614:30favilacloud has a value-index on everything already#2022-10-2614:32pppaulhttps://docs.datomic.com/on-prem/schema/schema-change.html#adding-avet-index that should be good enough?#2022-10-2614:34favilayes. Note this section https://docs.datomic.com/on-prem/schema/schema-change.html#altering-attribute-to-unique
> In order to add a unique constraint to an attribute, Datomic must already be maintaining an AVET index on the attribute, or the attribute must have never had any values asserted. Furthermore, if there are values present for that attribute, they must be unique in the set of current assertions. If either of these constraints are not met, the alteration will not be accepted and the transaction will fail.#2022-10-2614:34favila(I’m assuming through all of this that you are using on-prem--if you are using cloud you may have a different problem)#2022-10-2614:40pppauli'm on prem#2022-10-2614:52thumbnailIs it possible to fetch just the datomic-transactor-pro-<version>.jar from my.datomic ?#2022-10-2614:53thumbnailfor reference, for datomic-pro (and the other deps) this just works: wget --http-user=$DATOMIC_REPO_USER --http-password=$DATOMIC_REPO_PASS #2022-10-2614:56thumbnailcontext: we need to bump the postgresql adapter to something recent so we can use modern password strategies (iirc). We run datomic on Nomad (i.e. in a docker container) and built our own.
We need to include some jars in the installation anyway for prometheus metrics and custom database functions.
currently we fetch a datomic installation, manually fetch a bunch of jars to update and replace them in ./lib.
We figured we could just write a pom.xml to ‘bump’ the postgres dependencies and include our own, and simply switch to jib to build the Dockerfile. But we hit a snag here because we can’t actually fetch the datomic-transactor-pro jar.#2022-10-2614:59favilahow can you confidently write a pom.xml that accurately reflects everything a datomic transactor needs?#2022-10-2615:00favilaIMO patching lib is the only safe thing to do#2022-10-2615:00favila(It’s what we do, then we stash the updated zip on s3)#2022-10-2615:01favilaand this is even being 90% sure that most of the bytes in lib are just transitive deps of requiring the entire aws sdk and are not used.#2022-10-2615:39thumbnailI figured I'd use the pom that comes with the transactor 😅, it's bundled#2022-10-2615:39thumbnailAnd refer to it in my own pom, using managed deps to make sure i get the deps i want#2022-10-2615:44favilaAre you referring to the pom.xml in the root of the zip? That appears to be for the client (note the artifactId).#2022-10-2617:42thumbnailThe pom inside the transactor-jar (under META-INF)#2022-10-2617:43thumbnailI just hoped it would be available to pull from my.datomic 😁, otherwise we'll have to do the unpacking and lib patching, but it started to grow too big for my liking.#2022-10-2618:51Daniel JompheHi Cognitect, we might have found a disparity between Client API's pull and dev-local's pull.
(d/pull db '[*] nil) ; eid arg is nil, what happens next?
• dev-local's pull accepts a nil eid and makes this pull return nil.
• cloud's pull raises an exception about the nil eid.
Might we have a dev-local behavior identical to the cloud for this, please?#2022-10-2618:51Daniel JompheNow, I'm sure Cognitect will tell me I forgot something obvious... 😬#2022-10-2619:15ghadiI think this is worth raising in the official forum#2022-10-2619:16Daniel Jompheforum or ask?#2022-10-2619:17ghadiask, I think{:tag :div, :attrs {:class "message-reaction", :title "face_with_hand_over_mouth"}, :content ({:tag :span, :attrs {:class "emoji"}, :content nil} " 1")}
#2022-10-2619:30Daniel JompheYou asked, https://ask.datomic.com/index.php/790/is-there-disparity-in-d-pull-between-dev-local-and-client-api. Thanks @U050ECB92. :)#2022-10-2619:20Daniel JompheSorry, I felt the need to put a bit of humour into your day. 😆
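Until the behaviors converge, a guard on the caller's side keeps dev-local and cloud consistent (the helper name is hypothetical):

```clojure
;; Mirror dev-local's nil-tolerant behavior on both backends by checking
;; the eid before calling pull; returns nil instead of throwing.
(defn pull-maybe [db pattern eid]
  (when (some? eid)
    (d/pull db pattern eid)))
```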
#2022-10-2619:46ennI think there is a mistake in this documentation: https://docs.datomic.com/on-prem/overview/architecture.html. The bullet points under the “Peer Server” and “Transactor” headings are identical (I believe they describe transactors, not peer servers.)
#2022-10-2619:50jaretI think I see what we did there. An org mode error. The section is missing the following bullet points:
The ${Peer} server is a JVM process that provides an interface for the Datomic Client library.
- Accepts queries and ${transaction}s from Datomic Clients
- Submits transactions to, and accepts changes from, the ${transactor}
- Provides data access, caching, and ${query} capability to connected Clients, reading from the ${storage service} as needed#2022-10-2619:51jaretI will get this updated once I untangle how it happened.#2022-10-2619:51jaretThank you for the report!#2022-10-2620:36frankitoxCan I use the same memchached node (ElastiCache) for the transactor and a peer?#2022-10-2620:55Joe LaneYup 👍{:tag :div, :attrs {:class "message-reaction", :title "grin"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("😁")} " 1")}
#2022-10-2623:39jaretI saw your first question before you deleted it. I just wanted to add that while there is currently no way (api or other tooling) to heat up a new cache you can use a shared memcached for transactor and multiple peers. This pattern enables new peers to have some relevant information ready in the shared cache. Particularly useful if you have a dedicated peer walking the indexes or a reporting type peer process that might touch all data.
I am happy to answer more questions about caching in Datomic if you have a need. Email me or DM me.
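For reference, sharing one memcached cluster between the transactor and peers is configuration on both sides (host name hypothetical; see the on-prem caching docs linked above):

```properties
# transactor.properties
memcached=cache.example.internal:11211
```

Peers point at the same cluster via a JVM system property, e.g. `-Ddatomic.memcachedServers=cache.example.internal:11211`.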
#2022-10-2814:56Daniel JompheIs our IDE's pretty-printing shortening what we see of what Datomic stores about these floats?
Or are they really rounded up like this by Datomic?#2022-10-2815:03Daniel Jomphe• The rounding up is normal (pic below), but is there also some truncation, or
• is there no truncation and only the IDE's pretty-printing fooling us (pic above)?#2022-10-2815:40faviladatomic has both double and float types, but clojure literals and math functions only use doubles (floats will be widened to doubles).#2022-10-2815:40favilaIt looks to me like the input is a double but the schema type is float#2022-10-2815:40favila(float -72.72186717063904)
=> -72.72187
#2022-10-2815:41favilaso what is being printed in the datom is likely a float#2022-10-2815:42favilathat datom value will probably be widened at some point if you manipulate it at all, and will look like this:#2022-10-2815:42favila(double (float -72.72186717063904))
=> -72.72187042236328
#2022-10-2815:42favilahttps://docs.datomic.com/on-prem/schema/schema.html#value-types#2022-10-2815:43favilaI strongly suspect you used :db.type/float but you actually wanted :db.type/double#2022-10-2815:43Daniel JompheYeah, this is what my IDE prints, gonna look if it's true that our tx value was a double, not a float#2022-10-2815:44Daniel JompheYes, our schema#2022-10-2815:44favilayeah, so the double value in the transaction was narrowed to float to fit into the datom#2022-10-2815:45favilaand what you are printing in the datom is a float (a boxed Float object)#2022-10-2815:46favilaYou can confirm with (-> mydatom :v class)#2022-10-2815:53Daniel JompheOk, so the browser client lib gives us a Number, which is a 64 bit floating point, which we converted to float, a 32 bit floating point, due to our schema being float instead of double...#2022-10-2815:53Daniel JompheAnd some areas of our code also convert it to float explicitly.
We'll have to remodel, I suppose.#2022-10-2815:53Daniel JompheI knew Datomic was doing what it should. phew! 😅
Thanks a lot Francis!#2022-10-2815:54favilaThat’s a good point, if this value gets into a browser it’s going to be widened no matter what.#2022-10-2815:54favilayou probably just want doubles all the way through
#2022-10-2817:18dogenpunkSorry for the cross post, but this is probably the better forum. Does Datomic Cloud not run Clojure 1.11? I’m getting “Unable to resolve symbol: parse-uuid in this context” when deploying an Ion system. I’ve tried adding an explicit dependency on 1.11 with the same results.
@U0E0553EF As of the latest release we run Clojure 1.11#2022-10-2817:29dogenpunkOk. Weird. I just created this stack yesterday.#2022-12-0201:40onetomhow was this issue solved, btw?
i usually see this kind of behaviour if i have some .class files getting into the bundled ion code which are older than their corresponding source code.
the bundle is a zip file and somehow all the file creation and modification dates can become the same date-time, which is the creation time of the zip file itself.
in such case, clojure would prefer to load the *.class file, instead of compiling its corresponding source file.
to debug this, u should try to run clojure -M:ion-dev "{:op :push :uname whatever :creds-profile <your-aws-profile> :region <your-region>}", then look at unzip -l .datomic-ions/datomic/apps/<:app-name from ion-config.edn>/unrepro/whatever.zip | less
it shouldn't have any *.class files in it#2022-12-0217:28dogenpunk@U086D6TBN The issue is that creating a new stack from the AWS Marketplace page results in a stack using an older template. If memory serves, updating the Marketplace page is a slow process. I haven’t gotten around to updating my system yet, but I’m relatively confident that updating the system will resolve this.#2022-12-0404:40onetomah, i see. i haven't used the marketplace facilities for a long time.
we built some clojure tooling (using the cognitect aws lib) to drive cloudformation create/delete/update operations.
this is how our cloudformation "control center" looks like in a REPL NS:
;; Common stack operations
(comment
(defn $stack [] ($env cfg/dcs)) ;; Storage
(defn $stack [] ($env cfg/dcs-xxx)) ;; Compute
(defn $stack [] ($env cfg/apigw))
...
(-> ($stack) (grep> "aud") #_sort)
(-> ($stack) create-stack req!)
(-> ($stack) update-stack req!)
(-> ($stack) (stack-completed? 10))
(-> ($stack) describe-stack req! :StackStatus)
(-> ($stack) stack-outputs req!)
(-> ($stack) stack-outputs req! (grep> "node"))
(-> ($stack) stack-resources req! #_(resources-with-status "PROGRESS"))
(-> ($stack) stack-resources req! (grep> "instance"))
(-> ($stack) stack-resources req! vals (->> (grep #"endpoint" :LogicalResourceId))
(set/project [:LogicalResourceId :PhysicalResourceId]) print-table)
(-> ($stack) (merge DescribeStackEvents) req! stack-events-summary
(->> (take 20)) print-table)
;; (-> ($stack) delete-stack req!)
)
#2022-10-3116:33joshkhthe Datomic Cloud guide for https://docs.datomic.com/cloud/operation/growing-your-system.html#primary-compute-group mentions:
> If your system serves a high write volume across more than one database, you may want to run more than two instances in your primary compute group. Please contact Datomic support with your specific needs and we can guide you.
are there any rough guidelines or thresholds to determine if your system is considered high write volume? thanks and happy halloween 👻!#2022-11-0217:27stopaHey team, question: how does clustering work in join queries for datomic? For example, say I want to do this:
[1 :posts ?pid]
[?pid :title ?title]
Afaik, datomic would do two index lookups:
1. EAV index [1 :posts] to find the set of ?pid
2. Implicit join with EAV index, which would look up N ?pid, and find the corresponding ?title
My question is for 2. — how would the caching work? If there are N ?pid , we may end up fetching ~N different segments into memory. (Unless there is some kind of clustering)#2022-11-0217:40favilaQuery generally prefers AEVT#2022-11-0217:41favilaUsing the A if it’s known provides a kind of clustering/locality akin to what a column-oriented db would give#2022-11-0217:42favilabut yes, worst case, you could still have so many ?pid , spread over such a long time (so that their entity-ids are not at all contiguous) that you fetch nearly N segments
#2022-11-0217:42favilapartitions are a mechanism to control this#2022-11-0217:42favilaby enforcing a sort order#2022-11-0217:43favilahttps://docs.datomic.com/on-prem/schema/schema.html#partitions#2022-11-0217:44favilawhen you create an entity via a tempid, you can supply a partition; the partition id becomes the high bits of the entity id. By putting frequently-read-together entities into the same partition, you increase the chance that you will fetch significantly less than N segments for N items.#2022-11-0217:44favilabut this is not automatic, and you cannot alter an entity’s partition after creation.#2022-11-0217:45stopaReally interesting, thank you @U09R86PA4!#2022-11-0316:21stopaCurious question: Is there any database that solves this problem? Would love to learn how they approach it.#2022-11-0316:23favilamature sql databases often allow you to partition rows according to some criteria. The point of this is to put like rows into the same physical storage silos. (It isn’t quite the same, but you can use it to solve the same kinds of problems)#2022-11-0316:23favilae.g. postgres https://www.postgresql.org/docs/current/ddl-partitioning.html#2022-11-0316:24favilaThere’s also often similar knobs on individual indexes.#2022-11-0316:30stopaGotcha, thank you @U09R86PA4!#2022-11-0316:45BenjaminShould I model a google doc with "headings" component, or the headings with a ref attribute to their "parent" doc? Or what are the kinds of things you think about when deciding#2022-11-0416:03Drew Verleemy logs are showing that a Datomic Transactor, which was expected to be there, isn't available to the Datomic Peer.
How would you go about debugging this?
My first thought in glancing at the code is that I would want to know the connection info to make sure it wasn't somehow getting the wrong configuration (e.g the location of the transactor).
I see a call like this (datomic.api/log conn) I say "like" because there is indirection through potemkin e.g (potemkin/import-vars [datomic.api log]) that i'm not 100% sure how to read. And when i looked at the datomic docs, i can't find a datomic.api/log function.#2022-11-0416:21faviladatomic connects to storage, pulls the host= and alt-host= values (written into transactor.properties), and tries both of them to connect to the transactor.#2022-11-0416:22favilaif it can’t connect to storage, you’ll get some exception from your storage driver (e.g. from dynamo if you are using dynamo)#2022-11-0416:22favilaif it can’t connect to the transactor, you’ll get an exception thrown from artemis#2022-11-0416:22favilausing that you can bisect which thing is wrong#2022-11-0416:24favilaon the datomic side, the only things that can be wrong are bad connection string or a bad or missing alt-host value (assuming the txor is running fine)#2022-11-0416:25favilaeverything else is going to be something in your network stack that’s preventing peers from talking to storage or transactor or both
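The two values referenced above live in transactor.properties (host names here are hypothetical):

```properties
# address the transactor binds to and writes into storage for peers to find
host=10.0.0.5
# alternate address peers can try when host= is not routable from their network
alt-host=txor.example.internal
```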
#2022-11-0416:26favila(The purpose of alt-host= is to give a host value that peers could use to connect to the transactor, in case the host= value that the transactor uses to bind its listen is not routable to by peers)#2022-11-0501:49Drew Verleethanks again. Ill have to think about that a bit :thinking_face:#2022-11-0614:27prncHi 👋
When developing on my local machine connected to datomic cloud (ions), I sometimes get
#error {
:cause "Abruptly closed by peer"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "Abruptly closed by peer"
:at [org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]}]
:trace
<...>}
after, I guess, a period of inactivity on that connection -- is that a ‘normal’ thing?
(happy to try and provide more detail as I investigate)
Thanks!#2022-11-0614:27prncHere is an extended stacktrace
{:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "Abruptly closed by peer", :cognitect.http-client/throwable #error {
:cause "Abruptly closed by peer"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "Abruptly closed by peer"
:at [org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]}]
:trace
[[org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]
[org.eclipse.jetty.client.http.HttpReceiverOverHTTP process "HttpReceiverOverHTTP.java" 164]
[org.eclipse.jetty.client.http.HttpReceiverOverHTTP receive "HttpReceiverOverHTTP.java" 79]
[org.eclipse.jetty.client.http.HttpChannelOverHTTP receive "HttpChannelOverHTTP.java" 131]
[org.eclipse.jetty.client.http.HttpConnectionOverHTTP onFillable "HttpConnectionOverHTTP.java" 172]
[org.eclipse.jetty.io.AbstractConnection$ReadCallback succeeded "AbstractConnection.java" 311]
[org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 105]
[org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint onFillable "SslConnection.java" 555]
[org.eclipse.jetty.io.ssl.SslConnection onFillable "SslConnection.java" 410]
[org.eclipse.jetty.io.ssl.SslConnection$2 succeeded "SslConnection.java" 164]
[org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 105]
[org.eclipse.jetty.io.ChannelEndPoint$1 run "ChannelEndPoint.java" 104]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill runTask "EatWhatYouKill.java" 338]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill doProduce "EatWhatYouKill.java" 315]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill tryProduce "EatWhatYouKill.java" 173]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill run "EatWhatYouKill.java" 131]
[org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread run "ReservedThreadExecutor.java" 409]
[org.eclipse.jetty.util.thread.QueuedThreadPool runJob "QueuedThreadPool.java" 883]
[org.eclipse.jetty.util.thread.QueuedThreadPool$Runner run "QueuedThreadPool.java" 1034]
[java.lang.Thread run "Thread.java" 834]]}}
:via
[{:type clojure.lang.ExceptionInfo
:message "Abruptly closed by peer"
:data {:cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "Abruptly closed by peer", :cognitect.http-client/throwable #error {
:cause "Abruptly closed by peer"
:via
[{:type javax.net.ssl.SSLHandshakeException
:message "Abruptly closed by peer"
:at [org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]}]
:trace
[[org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint fill "SslConnection.java" 769]
[org.eclipse.jetty.client.http.HttpReceiverOverHTTP process "HttpReceiverOverHTTP.java" 164]
[org.eclipse.jetty.client.http.HttpReceiverOverHTTP receive "HttpReceiverOverHTTP.java" 79]
[org.eclipse.jetty.client.http.HttpChannelOverHTTP receive "HttpChannelOverHTTP.java" 131]
[org.eclipse.jetty.client.http.HttpConnectionOverHTTP onFillable "HttpConnectionOverHTTP.java" 172]
[org.eclipse.jetty.io.AbstractConnection$ReadCallback succeeded "AbstractConnection.java" 311]
[org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 105]
[org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint onFillable "SslConnection.java" 555]
[org.eclipse.jetty.io.ssl.SslConnection onFillable "SslConnection.java" 410]
[org.eclipse.jetty.io.ssl.SslConnection$2 succeeded "SslConnection.java" 164]
[org.eclipse.jetty.io.FillInterest fillable "FillInterest.java" 105]
[org.eclipse.jetty.io.ChannelEndPoint$1 run "ChannelEndPoint.java" 104]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill runTask "EatWhatYouKill.java" 338]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill doProduce "EatWhatYouKill.java" 315]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill tryProduce "EatWhatYouKill.java" 173]
[org.eclipse.jetty.util.thread.strategy.EatWhatYouKill run "EatWhatYouKill.java" 131]
[org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread run "ReservedThreadExecutor.java" 409]
[org.eclipse.jetty.util.thread.QueuedThreadPool runJob "QueuedThreadPool.java" 883]
[org.eclipse.jetty.util.thread.QueuedThreadPool$Runner run "QueuedThreadPool.java" 1034]
[java.lang.Thread run "Thread.java" 834]]}}
:at [datomic.client.api.async$ares invokeStatic "async.clj" 58]}]
:trace
[[datomic.client.api.async$ares invokeStatic "async.clj" 58]
[datomic.client.api.async$ares invoke "async.clj" 54]
[datomic.client.api.sync.Client connect "sync.clj" 92]
[datomic.client.api$connect invokeStatic "api.clj" 146]
[datomic.client.api$connect invoke "api.clj" 133]#2022-11-0614:30prncI guess I could treat this as a retryable anomaly?
Would appreciate any advice on this 🙂#2022-11-0616:22prncAlso there seems to be some chemistry flavoured spamming going on https://ask.datomic.com/?
#2022-11-0618:47Alex Miller (Clojure team)Thx I’ll let people know
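On prnc's retryable-anomaly question above: the :cognitect.anomalies categories :unavailable and :interrupted are the canonically retryable ones, while a :fault like this SSL handshake error is a judgment call at connect time. A sketch of such a retry wrapper — with-retries is a hypothetical helper, not part of the client API:

```clojure
(def retryable? #{:cognitect.anomalies/unavailable
                  :cognitect.anomalies/interrupted})

(defn with-retries
  "Call f, retrying with linear backoff when it throws an anomaly
   whose category is in retryable?. Hypothetical helper."
  [f & {:keys [max-tries sleep-ms] :or {max-tries 3 sleep-ms 500}}]
  (loop [attempt 1]
    (let [res (try (f)
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt max-tries)
                              (retryable? (:cognitect.anomalies/category (ex-data e))))
                       ::retry
                       (throw e))))]
      (if (= ::retry res)
        (do (Thread/sleep (long (* sleep-ms attempt)))
            (recur (inc attempt)))
        res))))

;; usage sketch: (with-retries #(d/connect client {:db-name "my-db"}))
```

Whether to add :cognitect.anomalies/fault to the set for the "Abruptly closed by peer" case is exactly the open question in the thread.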
#2022-11-0919:47Setzer22Hi! 👋 I'm trying to set up datomic with Postgres, and I believe I set up everything correctly, but when I start my system datomic fails with the following error:
Terminating process - Lifecycle thread failed
java.util.concurrent.ExecutionException: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
Position: 31
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at clojure.core$deref_future.invokeStatic(core.clj:2304)
at clojure.core$future_call$reify__8477.deref(core.clj:6976)
at clojure.core$deref.invokeStatic(core.clj:2324)
at clojure.core$deref.invoke(core.clj:2310)
at datomic.lifecycle_ext$standby_loop.invokeStatic(lifecycle_ext.clj:42)
at datomic.lifecycle_ext$standby_loop.invoke(lifecycle_ext.clj:40)
at clojure.lang.Var.invoke(Var.java:384)
at datomic.lifecycle$start$fn__3330.invoke(lifecycle.clj:73)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "datomic_kvs" does not exist
That's really strange, because I made sure to initialize my database with the provided scripts, and the table does seem to exist when I check via psql:
postgres@629dc6959e77
Any idea what could be going on?#2022-11-0919:53ghadidoublecheck your connection string#2022-11-0920:03Setzer22Seems to be correct :thinking_face: But after investigating a bit more, I realized psql doesn't show the table when I run with the datomic user, so the problem seems to be there.#2022-11-0920:08ghadithere you go
#2022-11-0920:09Setzer22Yes, but, I don't know what extra permissions I'm supposed to give it. I'm just running the provided init scripts which seem to grant the right access on the tables.#2022-11-0920:09Setzer22I'm not sure if there's a recommended postgresql version to use with datomic, I have the feeling picking latest (15.0) may not have been the best choice here#2022-11-0921:38Setzer22Looks like I was able to get some progress. The table was not being created inside the datomic database due to how I was running the initialization scripts. Unfortunately, there are more issues 😩#2022-11-0921:40Setzer22I set up my code to use the following uri to connect to datomic: datomic:<POSTGRES-HOST>:5432/datomic?user=datomic&password=<PASSWORD> (where <POSTGRES-HOST> and <PASSWORD> are redacted for obvious reasons, but have correct values). When doing that, I'm getting this error:
> ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ219007: Cannot connect to server(s). Tried with all available servers.]#2022-11-0921:42Setzer22does that ring any bells?#2022-11-1017:21Ivar RefsdalDid you set alt-host?
It's mentioned in the docs somewhere...
I have a working docker-compose setup here:
https://github.com/ivarref/spa-monkey/blob/main/docker-compose.yml
(requires the folder datomic from that repo as well.)
It has an init script that creates datomic_kvs...:
https://github.com/ivarref/spa-monkey/blob/main/datomic/init
(I agree that setting up datomic is way harder than it should be IMHO)#2022-11-1017:24Ivar RefsdalNote that in this setup I am not persisting any postgres data to an actual volume, so you will lose all work/changes when docker exits...#2022-11-1017:25fmnoiseHi everyone, I have a question about serial/parallel transactions processing. If I have multiple databases running on a single transactor, does that make it possible to run txs in parallel to separate dbs?#2022-11-1017:26fmnoiseI remember datomic transactions are not parallel by design to omit locks and other complicated stuff, but still curious if it's per db or for the whole transactor#2022-11-1017:27Joe Laneper-db#2022-11-1017:27fmnoisethanks 🙏#2022-11-1017:30Joe LaneNot sure if you're planning to use on-prem or cloud, but I recommend reading this post and making sure what you're planning to do aligns with the recommendations from marshall and rich
https://forum.datomic.com/t/multi-tenancy-databases/238
#2022-11-1017:30fmnoiseit's on-prem#2022-11-1018:20jjttjjI followed the datomic cloud setup guide, but when attempting to connect to the system I get an error
;;; deps.edn
{:deps
{com.datomic/dev-local {:mvn/version "1.0.243"}
com.datomic/client-cloud {:mvn/version "1.0.119"}}
:mvn/repos {"datomic-cloud" {:url ""}}}
(def client1 (d/client {:system "mysystem1234"
:server-type :ion
:region "us-east-1"
:endpoint ""}))
ExceptionInfo Unable to connect to
{:cognitect.anomalies/category :cognitect.anomalies/unavailable,
:cognitect.anomalies/message "DNS timeout 5000 ms",
:config {:system "mysystem1234",
:server-type :cloud,
:region "us-east-1",
:endpoint "",
:endpoint-map {:headers {"host" ""},
:scheme "http",
:server-name "",
:server-port 8182}}}
Any tips on how to start debugging this?#2022-11-1018:51Joe LaneCheck your template outputs via aws cloudformation describe-stacks --stack-name <compute-stack> and look for your ClientApiGatewayEndpoint in the output and grab the https url value. Replace your
""
with the ClientApiGatewayEndpoint.#2022-11-1018:51Joe LaneAlso, can you provide a link to the "setup guide" you're following?#2022-11-1018:53Joe LaneReading through https://docs.datomic.com/cloud/tutorial/client.html describes the ClientApiGatewayEndpoint .#2022-11-1018:56jjttjjI was looking at https://docs.datomic.com/cloud/getting-started/start-system.html
I don't seem to have ClientApiGatewayEndpoint in the compute stack output#2022-11-1018:58jjttjj(This is using solo topology by the way)#2022-11-1018:59Joe Lanethere is no such thing as a solo topology anymore, we consolidated solo+production topology while supporting scaling down to the original "solo" size (and cost).#2022-11-1019:00jjttjjStrange, it's still listed as an option when you step through the aws marketplace listing#2022-11-1019:01jjttjjOhhh or do you mean that solo now just refers to a smaller sized instance (etc)#2022-11-1019:01Joe Laneugh... so frustrating.
I'll give you the right link#2022-11-1019:01Joe Lanehttps://docs.datomic.com/cloud/releases.html#2022-11-1019:03Joe LaneOnce you "subscribe" to datomic-cloud in the aws marketplace you can create a system by using the storage and compute stacks under the "Current Releases" section of that link.#2022-11-1019:04jjttjjGotcha, will give that a try, thanks!#2022-11-1019:04Joe LaneIf you get stuck again please reach out. (unfortunately we are currently unable to remove the solo topology from the drop-down in the aws marketplace)#2022-11-1019:46jjttjjThat worked, thanks!
Small point on the docs, it would be useful to mention somewhere on https://docs.datomic.com/cloud/operation/new-system.html#storage that the stack name of the storage stack is the system name.
(I see it's mentioned https://docs.datomic.com/cloud/operation/storage-template.html#stack-name). That was my one snag#2022-11-1100:27jjttjjAnyone ever get datomic ions/`ion-dev` working from windows?
Currently getting this when trying to clojure -A:ion-dev '{:op :push}':
{:command-failed "{:op :push}",
:causes
({:message "Unable to transform path",
:class ExceptionInfo,
:data
{:home "C:\\Users\\me",
:prefix "C:\\Users\\me/.m2/repository", ;; mixing the slashes might be the issue?
:resolved-coord
{:mvn/version "1.0.362",
:deps/manifest :mvn,
:paths
["C:\\Users\\me\\.m2\\repository\\com\\cognitect\\transit-java\\1.0.362\\transit-java-1.0.362.jar"],
:dependents [com.cognitect/transit-clj]}}})}#2022-11-1107:49CaseyIf I plan to store dates (YYYY-mm-dd) in datomic and query against them often (using < and >), is it better to store them as serialized strings or as instants with the time component set to something like midnight?#2022-11-1108:09Lennart BuitDatomic can compare j.u.Date objects natively. I dont think thats the case for strings#2022-11-1108:10Lennart Buitsadly Datomic doesnt support java.time.LocalDate just yet#2022-11-1108:13Casey:in $ ?reference-date
:where
[?e :my/date ?date]
[(< ?date ?reference-date)]
I know from experience queries like this work for j.u.Dates :db.type/instant, but you think that won't work for :db.type/string?#2022-11-1108:20Lennart BuitYou can just try, and apparently I was wrong:
(d/q '{:find [?e]
:where [[?e :valid/from ?a]
[?e :valid/to ?b]
[(< ?a ?b)]]}
[[1 :valid/from "2020-01-01"]
[1 :valid/to "2022-01-01"]])
=> #{[1]}#2022-11-1108:20Lennart Buit(this vector-of-tuples-datasource only works in peer)#2022-11-1108:26CaseyHm ok, that is good to know, thanks! Back to my original question.. is there a significant perf difference between the two?#2022-11-1110:53Lennart BuitI don’t know the answer to that, sorry{:tag :div, :attrs {:class "message-reaction", :title "heart"}, :content ({:tag :span, :attrs {:class "emoji"}, :content ("❤️")} " 1")}
#2022-11-1114:51favilaYou could do tuples of ints, or plain ints, e.g. [2020 1 1] or 20200101#2022-11-1114:52favila“Performance” needs context—performance for what#2022-11-1114:55favilaI wouldn’t store them as juD with zeroed out time part just because it’s so easy for them to pick up extra precision and drift. You could use an attribute predicate to make sure it stays zero#2022-11-1407:41Linus EricssonYou can also store dates as epoch days.#2022-11-1317:45Victor InacioI’m trying to use Datomic analytics on Metabase, but after loading forever I’m checking schema and tables with presto CLI.
When getting columns from this table using small dev db I’m getting results but from a larger database with same schema I’m getting this error about ‘db must support tuples’
And I have no :db.type/tuple in my schema.
DESCRIBE Tenant;
Query 20221113_173617_00004_a7wmw failed: Assert failed: db must support tuples - see
(:db/tupleTypes ret)
My metaschema.edn is:
{:tables {:tenant/name {:name Tenant}}
:joins {}}
Any clues why?#2022-11-1414:26danierouxThe small dev db has the latest base schema, and the larger database does not.
You need to upgrade the larger database's base schema with: https://docs.datomic.com/cloud/operation/howto.html#upgrade-base-schema
#2022-11-1608:56Victor InacioOw, thanks @U9E8C7QRJ that was exactly the case, after upgrading the large one it worked normally.
#2022-11-1414:06ivanaHello! This query works, but only if I have some status attached to the entity. Is there a way to get some default value for the transaction and its attribute if status wasn't set on the entity?
(d/q '[:find ?e ?status ?last-tx ?caused-by
:in $ [?e ...]
:where
[?e :encounter-transmission/plan]
[(get-else $ ?e :encounter-transmission/status false) ?status]
[?e :encounter-transmission/status ?status ?last-tx true] ;; empty ?last-tx here if ?status = false by get-else
[(get-else $ ?last-tx :transaction/caused-by false) ?caused-by]]
.....)
Unfortunately it looks like I can't call get-else on the pre-last where clause#2022-11-1414:13favilaConsider pulling from ?e instead of extracting fields using where clauses#2022-11-1414:15ivanaActually I'm pulling ?e, just tried to show you a simple example. But anyway, even if I pull ?e, how can I access the last transaction which added the status attribute (if it was added at all)?#2022-11-1414:15favilaWhy can you not use get-else? Is it cardinality many?#2022-11-1414:18ivanaCause I can't call get-else with 5 or 6 parameters
May you show how I can use it for pre-last clause in my query?#2022-11-1414:50favilaSorry my fault for not reading this carefully#2022-11-1414:50favilaso you want to capture last-tx, but default to something if there is none.#2022-11-1414:50ivanaYep, exactly#2022-11-1414:51favilayou can either write your own get-else, or use a rule with two branches: one that matches if status is absent, and one if its present, and both bind to ?last-tx#2022-11-1414:53ivanaSorry, I'm not an expert in Datalog - may you show me the clause code for it?#2022-11-1414:53ivana2 branches rule I mean#2022-11-1414:56favila'[:find ?e ?status ?last-tx
:in $ [?e ...]
:where
[?e :encounter-transmission/plan]
(or
[?e :encounter-transmission/status ?status ?last-tx]
(and
(not [?e :encounter-transmission/status])
[(ground false) ?status]
[(ground -1) ?last-tx]))]
#2022-11-1415:06ivanaThanks a lot, I checked it, and looks like it works! Didn't check its performance on real data, but on the test one it returns all that's needed!#2022-11-1713:30thumbnailI noticed the docs may contain a typo:
> Us sync only to enforce cross-client causal relationships.
I think the first word should be Use.
page: https://docs.datomic.com/on-prem/transactions/client-synchronization.html#conclusion
#2022-11-2310:06CaseyFor REPL development when using dev-local with durable data, how can one use attribute predicates? I am getting "Unable to load namespace for..." errors.#2022-11-2320:45favilais the namespace + var of the attribute predicate in your process?#2022-11-2320:46favilaYou should be able to do (requiring-resolve sym-of-the-attr-predicate)#2022-11-2320:46favilain a repl#2022-11-2319:56Dustin Getzin cloud how is a with-db different than a db, can I just always use with-db ?#2022-11-2320:12favilawith-db is a stateful value kept by the remote side#2022-11-2320:12favilain on-prem, that db is just a db object with extra novelty on it; but in cloud it has to be retained by the server and you get a reference to it#2022-11-2320:13favilathis also means you need to ensure session-stickiness, and you might lose access to it#2022-11-2320:13favilaoddly there’s no explicit resource management so I have no idea how you release it or when it is released.#2022-11-2321:08Dustin Getzoh because in cloud the with-db is remote from the app, so you have to move the speculative txn from app to query server and hopefully only once#2022-11-2321:08favilayes, “with-db” is “make a with-db and give me a pointer to it”
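In client-api terms, the flow favila describes looks roughly like this (a sketch; conn, the attribute, and the tx-data are hypothetical):

```clojure
;; Allocate a server-side with-db (Cloud retains this value remotely and
;; hands back a reference), then layer speculative novelty onto it.
(let [wdb (d/with-db conn)
      {:keys [db-after]} (d/with wdb {:tx-data [{:db/id   "tmp"
                                                 :inv/sku "SKU-1"}]})]
  ;; db-after carries the un-transacted novelty; queries against it see
  ;; the speculative datoms, and nothing is written to the log.
  (d/q '[:find ?e :where [?e :inv/sku "SKU-1"]] db-after))
```

A plain db from (d/db conn) has no such server-side state, which is why with-db is the thing you ask for only when you intend to call d/with.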
#2022-11-2321:09Dustin Getzthank you#2022-11-2411:24Ben HammondHi.
Trying to find the datomic-cloud repository that is hosting 1.0.119
The datomic-cloud repository link on https://docs.datomic.com/cloud/releases.html#current
takes me to
https://docs.datomic.com/cloud/ions/ions-reference.html#libraries
which I find unexpected
and the old
:mvn/repos {"datomic-cloud" {:url ""}}
just isnt working for me....
what is the new repo url?#2022-11-2411:36Ben Hammondoh its
com.datomic/ion {:mvn/version "1.0.59"}
that I have the problem with#2022-11-2411:49Ben Hammondthis is on Windows 11#2022-11-2411:49Ben Hammondmaybe that is my problem...#2022-11-2412:01Ben HammondPS C:\Users\ben\dev\clj\foobar> clj
Downloading: com/datomic/ion/1.0.59/ion-1.0.59.pom from datomic-cloud
Downloading: com/datomic/ion/1.0.59/ion-1.0.59.jar from datomic-cloud
Error building classpath. Could not find artifact com.datomic:ion:jar:1.0.59 in central ()
#2022-11-2412:08Ben Hammond#2022-11-2412:08Ben Hammondcould it be an aws permissioning thing, I wonder#2022-11-2415:01Ben HammondI can manually copy the files from my (working) ubuntu box to my not-working Win11 box
which will have to be good enough for now#2022-11-2416:07Ben Hammond(oh its Turkey-Dinner day isn't it)#2022-11-2418:08Ben Hammondwishing you happy turkey dinner days#2022-11-2814:28jaret@U793EL04V IIRC you have to have some aws perms. Do you have AWS Client installed?#2022-11-2814:46Ben Hammondyes I do#2022-11-2518:00caleb.macdonaldblackWhat’s the best way to go about debugging this issue?
datomic.client.api/transact api.clj: 200
datomic.client.api.protocols/fn/G protocols.clj: 72
datomic.client.api.sync/eval2257/fn sync.clj: 104
datomic.client.api.async/ares async.clj: 58
clojure.lang.ExceptionInfo:
• It’s an exception that is thrown periodically. Maybe once every few hours or so.
• We have “Basic” monitoring enabled
• Running these stacks: https://s3.amazonaws.com/datomic-cloud-1/cft/973-9132/datomic-storage-973-9132.json & https://s3.amazonaws.com/datomic-cloud-1/cft/973-9132/datomic-compute-973-9132.json
• Nothing in the dashboard seems to look out of place or correlate with when these exceptions happen.
• No alerts in the logs or anything else that looks out of place.
• No cloudwatch alarms
• It’s happening in our production and staging environments which are completely separate deployments on different AWS accounts.
• Maybe it’s just a hiccup in the network however it seems too frequent.
I’m considering persisting the exception in-memory and then interrogating it through a remote REPL.
There is no monitoring enabled for API-Gateway. Maybe I could enable that?
Any other ideas?#2022-11-2518:10Joe Lane• Is this happening while you're transacting? Querying? Both?
• How long is this request taking before it returns this exception?
• What size instance are you using?
• Do you have query groups or are all of these requests going to your primary compute group?#2022-11-2518:31caleb.macdonaldblack• Is this happening while you’re transacting? Querying? Both?
◦ Transacting
• How long is this request taking before it returns this exception?
◦ Hard to tell, although it appears to be not long at all. < 100ms
• What size instance are you using?
◦ t3.small - Dashboard shows CPUUtilization around 2% and memory below 800MB
• Do you have query groups or are all of these requests going to your primary compute group?
◦ We have a compute stack and a storage stack only. So no query groups I believe
#2022-11-2519:15Joe Lane@U3XCG2GBZ Why don't you open a https://www.datomic.com/support.html with us and we can check it out after the break?#2022-11-2519:15caleb.macdonaldblackNo worries#2022-11-2609:27cl_jHi everyone! How do we know if a datomic.ion.cast/alert succeeded or failed, check exception or the return value?#2022-11-2814:55BenjaminJo can transact fail transiently when called from an ion? Aka should I retry#2022-11-2918:31jaretDatomic Cloud 981-9188 now available!
https://forum.datomic.com/t/datomic-cloud-981-9188/2163
#2022-12-0205:59onetomI'm getting :dependency-conflicts on ion-dev :push operations:
:dependency-conflicts
{:deps
#:com.datomic{client-api #:mvn{:version "1.0.58"},
client #:mvn{:version "1.0.126"},
client-impl-shared #:mvn{:version "1.0.98"},
client-cloud #:mvn{:version "1.0.120"}},
Shouldn't the ion-dev library be upgraded on every compute group environment upgrade too, since it seems to contain the list of implicit dependencies present on ion servers?
❯ unzip -p ~/.m2/repository/com/datomic/ion-dev/1.0.306/ion-dev-1.0.306.jar datomic/ion/dev/cloud-deps.edn | clojure -M -e "(use 'clojure.pprint) (pprint (read-string (slurp *in*)))"
I've asked about this issue on https://forum.datomic.com/t/datomic-cloud-981-9188/2163/2 but no response yet, so im not sure if that was the right place to ask.#2022-12-0216:48nottmeyI have a Datomic Cloud 973-9132 setup and wanted to upgrade following the https://docs.datomic.com/cloud/operation/upgrading.html#storage-and-compute.
I clicked “Update” on the storage stack and provided the newest template (https://s3.amazonaws.com/datomic-cloud-1/cft/981-9188/datomic-storage-981-9188.json), then set “Reuse Existing Storage” to true and started the update without changing any options.
Then this happened (see first screenshot). When I retry it errors (second screenshot), when I roll back it also errors (third screenshot). How do I proceed? 😄#2022-12-0217:37Joe Lane@nottmeyHave you performed the "Split Stack" operation? https://docs.datomic.com/cloud/operation/split-stacks.html#2022-12-0218:04nottmeyAh, I missed that requirement 🙈
I’m just running a master stack system like described here: https://docs.datomic.com/cloud/getting-started/start-system.html#2022-12-0218:05nottmeySo do I need to split the stacks for upgrading?#2022-12-0409:40nottmeyAlright, thank you ☺️#2022-12-0420:44jdkealycan i copy an entire dynamo table and point a transactor to it and restore that way ?#2022-12-0421:06Joe LaneHave you tried Datomic's Backup/Restore functionality?#2022-12-0421:07jdkealyyes, but the DB is 75 gigs#2022-12-0421:07jdkealyi want to try excising some data, but don’t wanna do it on prod#2022-12-0421:08Joe LaneAre you trying to tell me the database is large or small when you say 75 gigs?#2022-12-0421:13Joe LaneMaking a backup in S3 (which is only the delta from your last backup, if you have one), then restoring to a DIFFERENT DDB table is significantly faster than "copying" (do you mean DDB Backup/Restore or listing all items and reading/putting them?) the DDB Table.
Especially with the latest release, which cut backup times in half.
Just make sure to run it from an EC2 instance.#2022-12-0421:36jdkealyoh sorry… the DB is large at 75 gigs#2022-12-0421:38jdkealyreally the root of my question was using DDB to copy the table, the current setup is quite brittle and i kinda didn’t wanna touch production servers#2022-12-0421:40Joe LaneI still think you should use datomic's backup/restore.#2022-12-0421:41jdkealyok, i don’t think there are 75GB of space on my transactor#2022-12-0421:41jdkealydon’t i need to backup, then upload ?#2022-12-0421:42Joe LaneNope! Check out our https://docs.datomic.com/on-prem/operation/backup.html page. We don't touch the disk when your backup-uri is an S3 bucket.#2022-12-0421:44Joe LaneAlso, the backup and restore jobs don't have to run on the transactor. You can run them from a separate EC2 Instance (as long as you give the right permissions)#2022-12-0421:48jdkealyoh that’s great#2022-12-0601:45jdkealygiven that you can do everything with S3 and not touch disk, is the 10Billion datoms data limit still relevant ?#2022-12-0601:50Joe LaneA few things:
• There is no 10Billion datom limit, never has been. It's more of a recommendation to have a conversation with us.
• I don't see how using S3 for backups is related. Can you expand on this for me a little bit?#2022-12-0601:51jdkealyi thought one of the reasons was that after 10 billion datoms, backups became to large to put on any normal disk size#2022-12-0601:51jdkealytoo*#2022-12-0602:07Joe LaneNever heard that one before, but I can assure you, there is no 10 billion limit.#2022-12-0917:06Wes HallAnybody happen to know if the storage stack CF template allows for the overriding of this behaviour of naming the resources after the stack name? This coupling seems less than ideal when it means I can't put the word "datomic" in my datomic stack names or I have to put up with "department of redundancy department" names to all my resources.
No way to call the stack "datomic-codename-storage" and have the system just be called "codename"? Or would I need to hack the template?#2022-12-1020:14jdkealywhat's wrong with my s3 restore ?
/datomic/datomic/bin/datomic restore-db s3:/my-bucket/48-2022/a "datomic:"
error =>
java.lang.IllegalArgumentException: S3 URI is incomplete, missing (:bucket)#2022-12-1020:15favila s3://my-bucket, not s3:/my-bucket#2022-12-1020:18jdkealyoh dang thanks. I could have sworn I saw docs with one slash#2022-12-1218:42neilprosserAround three hours ago our Cloud compute group (which was upgraded to latest this morning) seems to have stopped keeping the memory index size in check. About an hour ago it stopped accepting transactions reporting the "Busy indexing" anomaly. IndexMemMb which never normally gets above 400 or so flat-lined at 2.23k. Should we ever have a situation where we wait over an hour for the indexing process to catch-up? It's not something I've seen before.#2022-12-1218:50Joe LaneIf you haven't already, you should open a support case with us.#2022-12-1218:52neilprosserThanks @U0CJ19XAM. I've just done it.#2022-12-1220:40Daniel JompheWe too are on the newest version - please keep us posted with your conclusions if ever the cause might be found in non-edge-case situations!#2022-12-1220:56steveb8nAlso interested in this. To avoid an outage, we'd be interested in an update on this one#2022-12-1221:57jaret@U0514DPR7 @U0510KXTU we have reproduced the problem and are working with Neil to address the issue. We will update everyone shortly.
#2022-12-1222:46jaretHello Datomic Cloud Users! If you are currently running 981-9188. We ask that you downgrade to 973-9132.#2022-12-1307:56onetomare there any explicit downgrade instructions?#2022-12-1307:57onetomhttps://docs.datomic.com/cloud/operation/upgrading.html#do-not-downgrade just says don't downgrade or "contact support".
my main question is whether we should downgrade the compute stack 1st or the storage stack?#2022-12-1307:58onetomand should we downgrade all the libraries ion, client-cloud etc too?#2022-12-1311:56onetomthanks @U1QJACBUM for your quick support email answer!
i will put your answer here for others too:
> You should downgrade all stacks. The order of downgrade does not matter, but the critical stack to downgrade is primary compute. You can perform this operation by using the templates for 973-9132 and following our update stack documentation. Select the stack you want to downgrade, then select update stack, but use the 973-9132 templates.
#2022-12-1316:39jaret@U086D6TBN Sorry for missing your other question. It's been a long night of assessing this issue. You do not need to downgrade ion, or client-cloud. We recommend removing any usage of io-stats you may have added.
#2022-12-1222:46jarethttps://forum.datomic.com/t/critical-notice-for-datomic-cloud-customers/2169#2022-12-1222:46jaretCC @U0514DPR7 @U031K8CUX6D @U0510KXTU who were interested on the other thread. And special thanks to Neil for allowing us access to their system to be able to quickly identify the issue.
#2022-12-1302:13jdkealyIf I restore my DB to a new table and use a new transactor address, will the transactor write its new location to storage ?#2022-12-1302:22jdkealydoes could not read transactor location from storage mean it can't resolve the address or literally that it can't find the location ?#2022-12-1302:25jdkealyI just finished like a 50GB restore. I restarted datomic and restarted the peers and get :db.error/read-transactor-location-failed I'm wondering if that means the transactor never wrote its location or if it means it can't resolve. I'm looking at the dynamo table is it's like 50GB, but it's impossible to decipher if it wrote its location or not.#2022-12-1302:30Joe LaneYou need a transactor running for the peer to connect#2022-12-1302:34jdkealyright! I do have one running#2022-12-1302:36jdkealyI just pointed it to a fresh new dynamo table#2022-12-1303:26jdkealyI have a successful datomic deploy in kubernetes, when I call netcat on the host I get
[email protected]
I have a failing deploy where it says could not read transactor location from storage and the same command reads
[email protected]
Could that be a clue as to what's going on ?#2022-12-1316:38jaretThe implication of this is that the transactor machine does not have the permissions to communicate to underlying storage. This could be AWS perms, network perms, or other storage level permissions. What's your underlying storage?#2022-12-1317:02jdkealydynamo#2022-12-1317:02jdkealybut when i do this from the transactor machine, i get Connection to datomic (172.20.249.103) 4334 port [tcp/*] succeeded!#2022-12-1317:07jdkealyIt's difficult to tell where the problem exists with this message.
I know the transactor can communicate with Dynamo because I see its heartbeat incrementing and the transactor location.
I believe the peer can connect to Dynamo because I think I would have gotten an AWS error.#2022-12-1405:01jdkealyare there any additional debugging steps that could help determine this @U1QJACBUM#2022-12-1412:16jaret@U1DBQAAMB can you SSH or something onto the box with the transactor and reach DDB through the AWS API? Additionally, are there logs from transactor startup on that box for the transactor?#2022-12-1418:26jdkealyI got to the bottom of the issue by REPL'ing onto the peer and calling d/list-databases.
It could not reach storage. I feel like there could be a better error message like "access denied to dynamodb table" or something.#2022-12-1309:32Quentin Le GuennecHello, can I have multiple physical databases for a single transactor? Or do I have to deploy a new transactor for, eg, a qualification database? (which could be a bit expensive)#2022-12-1311:58onetomwhat do u mean by physical database?
also, are you talking about datomic on-prem or datomic cloud?
u can run the d/create-database function multiple times against the same datomic client.#2022-12-1316:36jaretYeah to echo Tamás questions... we'd need to know what you mean by "physical database." But assuming you mean having multiple datomic databases on the same datomic system, i'd then need to know if you are using Datomic on-prem or Datomic cloud. In both instances you can have multiple databases on the same system, but there are caveats and depends on what your databases will be used for in on-prem. Essentially, on-prem allows you to have multiple databases, but we recommend having a single DB per system with potentially smaller operational DBs alongside. In On-prem the transactor has to hold the memory index for each DB and thus multiple DBs with high throughput is not a great fit for on-prem. Cloud however is more suited for a multi-tenant model. We describe all the implications of multiple databases in on-prem here: https://docs.datomic.com/on-prem/operation/capacity.html#multiple-databases#2022-12-1411:34Martin AkinolaHello everyone, has anyone been able to succesfully connect datomic to presto on emr,#2022-12-1615:33daemianmackwhat’s the relationship of dev-local feature that is part of client-cloud as of 2020-07-17 to the dev-local distributed via dev-tools?
i see dev-local is still available as part of the dev-tools distribution. can we ignore that if we upgrade to a modern version of client-cloud?#2022-12-1721:05jdkealyi see datomic.objectCacheMax default = 50% of VM RAM If my peer connects to two databases, does that number bump to 100% ?#2022-12-1722:51favilaObject cache is shared among all databases#2022-12-1723:33jdkealythanks#2022-12-1803:20Drew VerleeWhy would the function datomic/create-database throw an exception?
First off, help me understand if datomic/create-database is throwing. This is what part of my stack trace looks like:
[datomic.api$create_database invoke "api.clj" 22]
[centriq_web.datomic.DatomicDB start "datomic.clj" 57]
What i see is in order 1, 2
[datomic.api$create_database invoke "api.clj" 22] <--- calls create database
[centriq_web.datomic.DatomicDB start "datomic.clj" 57] <-- 1. our app
So yeah, to me it looks like it throws.
However, given what i see in the logs, it looks like the uri i pass create-database is the correct uri for our database, e.g if i pass it locally it works. Would it throw if it couldn't use the uri from the aws ec2 instance the app is deployed to?
I have a question with more details on the forum: https://forum.datomic.com/t/what-does-this-st-tell-me-is-wrong/2170. My current theory is that it's a networking issue. But i would really like the stacktrace to say something even remotely like "can't reach the thing you're looking for, or it doesn't exist"
What i see instead, i guess, is some repeated code in the stack trace (which makes sense given it's "retrying"). But what did it retry? Create database, i guess. But why did it have to retry? I guess it couldn't find it, right? I'm going to proceed assuming that's it 🐎 .#2022-12-1805:31Joe LaneWhat version is your AWS Java SDK Dependency (e.g. DynamoDB), JDK version, and Datomic Version?
The problem is in the stacktrace:
> {:type java.lang.ClassNotFoundException
> :message "com.amazonaws.retry.RetryMode"
> :at [java.net.URLClassLoader findClass "URLClassLoader.java" 382]}]#2022-12-1805:46Drew VerleeThanks!
I'll grab that information, it's harder than it should be, to be sure.
I assume the problem is described in the stacktrace, but i don't understand how.
How i read the part you highlighted is "i wasn't able to find a function, maybe the url class loader"
Then an unrelated message that isn't at all a message but just the words retry, implying that something was... retried.
What do you read?#2022-12-1806:04Drew Verleejdk: openjdk8
datomic version:
[com.datomic/datomic-pro "0.9.5966" :exclusions [com.google.guava/guava commons-codec org.apache.httpcomponents/httpcore org.apache.httpcomponents/httpclient]]
is this the "aws java sdk"? i don't see how that's dynamodb related, but it looks to be the closest thing:
[com.amazonaws/aws-java-sdk-core "1.11.664" :exclusions [com.fasterxml.jackson.core/jackson-annotations com.fasterxml.jackson.core/jackson-databind]]
[org.sharetribe/aws-sig4 "0.1.3"]#2022-12-1806:06Drew Verleei'll be asleep soon. thanks for even looking at this 🙂 . Err if you don't have much time just get me your gut call on what the issue is, im thinking networking right now.#2022-12-1806:07Drew Verleealso, your first thoughts on how to troubleshoot this kind of thing might be just, if not more helpful for me.#2022-12-1815:26Joe LaneMy read is:
• Your apps call to d/create-database is the first time the AWS client is initialized (because it has to connect to DynamoDB)
• When that happens, Datomic requires / imports the AWS and DynamoDB specific classes it needs. You can see that after [datomic.require$require_and_run invokeStatic "require.clj" 21] in the stacktrace.
• Inside the datomic.aws namespace (per the stacktrace section w/ [datomic.aws__init <clinit> nil -1] ) , we attempt to load the class com.amazonaws.retry.RetryMode (not use it, just load the class)
• Per the below message, we can't find the class.
{:type java.lang.ClassNotFoundException
:message "com.amazonaws.retry.RetryMode"
:at [java.net.URLClassLoader findClass "URLClassLoader.java" 382]}
Not sure if your (possibly transitive) deps changed recently but for some reason that class is not on your classpath.
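For readers hitting the same wall: a minimal REPL check of Joe's diagnosis, run on the affected box. Only the class name from the stacktrace above is assumed; nothing Datomic-specific is involved.

```clojure
;; Resolves the class the stacktrace says is missing. If the
;; aws-java-sdk-core jar on the classpath is too old to contain it,
;; this throws ClassNotFoundException, confirming the diagnosis.
(try
  (Class/forName "com.amazonaws.retry.RetryMode")
  :found
  (catch ClassNotFoundException _ :missing))
```

For what it's worth, RetryMode is a relatively late addition to the 1.11.x SDK line, so a pinned older aws-java-sdk-core could plausibly lack it; treat that as a hypothesis to verify against the SDK changelog, not a confirmed fix.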
#2022-12-1818:56Drew Verleethanks for the perspective, i guess i had blocked the idea that it actually couldn't find the class because i'm not sure how we could have ended up there.
But i'll retrace my steps.#2022-12-1818:57Joe LaneBefore you create the database, try listing databases and see if you get the same result#2022-12-1821:05jdkealyI tried to excise an attribute over 24 hours ago, and still don't see any changes.
{:db/excise :cart/session
:db.excise/before #inst "2030"}#2022-12-1821:15Joe LaneHave you run an indexing job?
From https://docs.datomic.com/on-prem/reference/excision.html
>> While the excise request itself is transactional, the excision operation is not transactional – the effect of excision is a background operation that occurs during the first indexing job after an excision transaction. More than one excision can occur between indexing jobs, and you should avoid attempting to repeatedly excise/requestIndex in an attempt to make excision feel synchronous. It's not. If you need to coordinate with a database that is guaranteed to have your excision, you can accomplish this with syncExcise.#2022-12-1821:44jdkealyI called request-index, yes. Weirdly the attributes just went away.#2022-12-1821:45jdkealyThe transactor had restarted and the pods too, and now I see no references to the attribute I excised I'm doing a dry-run for a migration to a different stack, and I'd like to have a little more clarity. The old devs are giving the JVM 16GB of RAM due to performance issues that I zero'd in as stemming from this one attribute.#2022-12-1821:49jdkealyI'm trying to get the RAM requirements down to 2GB. The stack dies when it hits this one query which has 80M records. A very simple query
(d/q '[:find ?e
:in $ ?tip
:where
[?e :cart/session ?tip]]
db-conn token)
with drastic consequences.
Now that that's been excised, I'm down to 4-8GB RAM required.#2022-12-1823:18Joe LaneHow many results does that query return?
Is that a Cardinality many attribute?
Is that attribute indexed? Unique?
What version of datomic are you running?
You keep saying “required” but I’m not clear what that means to you.
I understand cost saving, but why is getting the peer running with 2g an objective?#2022-12-1901:29favilaI assume this is a merely representative query and not the full query. But if it is the full query, consider using d/datoms and processing lazily if you need bounded memory use#2022-12-1901:30favilaUsing only 2gb for a peer with ad hoc query loads is really aggressive #2022-12-1901:32favilaPeers are more like “read replicas” in a traditional sql db and should be sized accordingly, unless you have written queries very carefully with memory use and small intermediate result sets in mind#2022-12-1917:37jdkealyWell, I want it down to 2GB in staging. Not immediately crashing with 2GB is a goal.#2022-12-1917:52jdkealyWe've already paid for reserve instances in AWS, and seeing slowness with 16GB of RAM. If we ever needed go beyond 16GB, we'd be in a lot of trouble. 2GB without crashing sounds like a reasonable goal to optimize for prod#2022-12-1918:20Joe Lane"seeing slowness", there are more approaches than just adding RAM to improve Datomic Performance.#2022-12-1821:50jdkealyIs there any way to see the status of the indexing job or to know when it's scheduled to run ?#2022-12-1823:19Joe LaneYou can check the logs and metrics#2022-12-1917:01frankitoxI'm reading the excise docs and it says
> Large excisions can trigger indexing jobs whose execution time is proportional to the size of the entire database, leading to back pressure and reduced write availability.
What is write availability?#2022-12-1917:13favilaAbility to write, i.e. transact#2022-12-1918:01frankitoxI'm trying to figure out how many datoms I can excise before triggering back pressure. I'm thinking maybe memory-index-max / 2 . Is there an estimate of how many datoms a mb is?#2022-12-2009:36icemanmeltingHi, I have a question regarding datomic peer. I have been doing some tests, and got to the point where I wanted to delete the db I was using and free up the space. I have used the gc-deleted-dbs, but the space keeps increasing instead. Is there an extra step after this is run? Thanks in advance#2022-12-2014:17jaretWhat is your underlying storage? gcStorage and gc-deleted-dbs mark garbage for collection by underlying storage. So if your underlying storage requires you to manage or run vacuum etc then you will need to run those utilities. I believe per DDB docs, DDB takes 24 hours to collect.#2022-12-2014:17icemanmeltingIt is PG#2022-12-2014:17icemanmeltingpostgresql#2022-12-2014:20jaretYeah with postgres you need to run vacuum after to fully reclaim the space: https://www.postgresql.org/docs/9.1/sql-vacuum.html#2022-12-2014:21icemanmeltingThanks! I actually learned something 😄 This will come in handy#2022-12-2121:48jdkealyIs there any way to see the size of the DB index ?#2022-12-2309:44icemanmeltingLet’s say that I have an attribute set to :db/fulltext true, and the content is “this is it guys”, why would the fulltext query, only match “guys” and ignore the rest of the words in that string? This is the query i am using btw, and :f-text/value is the attribute that was set to :db/fulltext true.
(d/q '[:find ?id ?value
:keys id value
:where
[?p :f-text/id ?id]
[(fulltext $ :f-text/value "this") [[?p ?value]]]]
(d/db @conn))#2022-12-2405:20Joe Lane@U9BQ06G6T
"this", "is", and "it" are common "stop words" in lucene, the fulltext-search library Datomic uses. These stop-words are filtered out when Lucene analyzes strings.
This https://stackoverflow.com/questions/17527741/what-is-the-default-list-of-stopwords-used-in-lucenes-stopfilter post has more details and links.#2022-12-2313:41andre.stylianosDoes anyone know if there’s a way to configure the size limit for the request body in a datomic ions env? We’re handling some file uploads and anything above 8MB gives us a “413 (Request Entity Too Large)“. That seems to match http-kit s default, but I don’t know if that’s what ions use under the hood and, if that’s the case, whether we have control over it?#2022-12-2401:47steveb8nbest to avoid it altogether. you can use a pre-signed url to handle the upload with an event bridge hook back to the Ion lambda. Ion can also generate the pre-signed url using the Java SDK#2022-12-2618:29bhurlowis there a way to get io-stats from a d/pull? (on-prem)?#2022-12-2618:32bhurlowok, this is very obvious in hindsight:
(d/pull db expr eid :io-context :foo/bar)
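A hedged sketch of the same io-stats facility on the query side, from memory of the on-prem io-stats docs (check them for the exact shape): the map forms of the query APIs accept an :io-context tag and return the io report alongside the result. The :foo/bar tag mirrors bhurlow's snippet; the :db/doc pattern is just illustrative.

```clojure
;; With :io-context supplied, the map form returns a map of
;; {:ret <ordinary result>, :io-stats <per-call io report>}
;; instead of the bare result.
(def res
  (d/query {:query '[:find ?e :where [?e :db/ident :db/doc]]
            :args  [db]
            :io-context :foo/bar}))

(:ret res)      ;; the ordinary query result
(:io-stats res) ;; io report tagged with :foo/bar
```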
#2022-12-2712:28jaretglad you got it. If there is something you think we could/should add into the docs to help better use io-stats please let me know.#2022-12-2716:21Setzer22I'm using datomic (on-prem) and I need to move data from one datomic instance to another. More specifically, I need to send data from A and B in such a way that when the operation ends, B contains a replica of A's data. For that, I was trying to use the backup-db and restore-db scripts. My logic seems correct: I can backup and restore from the same database (from A to A), but I'm getting an error when attempting to restore a backup made from another database (from A to B). Any idea of how to fix this?
java.lang.IllegalArgumentException: :restore/collision The name 'datomic' is already in use by a different database
at datomic.error$arg.invokeStatic(error.clj:79)
at datomic.error$arg.invoke(error.clj:74)
at datomic.error$arg.invokeStatic(error.clj:77)
at datomic.error$arg.invoke(error.clj:74)
at datomic.backup$create_restore_target.invokeStatic(backup.clj:460)
at datomic.backup$create_restore_target.invoke(backup.clj:449)
at datomic.backup$restore_db.invokeStatic(backup.clj:490)
at datomic.backup$restore_db.invoke(backup.clj:481)
at datomic.backup$restore.invokeStatic(backup.clj:571)
at datomic.backup$restore.invoke(backup.clj:568)
at datomic.backup_cli$restore.invokeStatic(backup_cli.clj:53)
at datomic.backup_cli$restore.invoke(backup_cli.clj:44)
at clojure.lang.AFn.applyToHelper(AFn.java:154)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.core$apply.invokeStatic(core.clj:667)
at clojure.core$apply.invoke(core.clj:662)
at datomic.require$require_and_run.invokeStatic(require.clj:22)
at datomic.require$require_and_run.doInvoke(require.clj:17)
at clojure.lang.RestFn.invoke(RestFn.java:423)
at datomic$_main$fn__163.invoke(datomic.clj:150)
at datomic$_main.invokeStatic(datomic.clj:149)
at datomic$_main.doInvoke(datomic.clj:142)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.core$apply.invokeStatic(core.clj:667)
at clojure.main$main_opt.invokeStatic(main.clj:514)
at clojure.main$main_opt.invoke(main.clj:510)
at clojure.main$main.invokeStatic(main.clj:664)
at clojure.main$main.doInvoke(main.clj:616)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:40)#2022-12-2716:34Setzer22Note that this error does not occur when restoring a backup made from the same database (it works fine in that case), only when trying to move data between databases.#2022-12-2716:59Joe Lane@jsanchezf Do you already have a different database in the destination system named “datomic”?
the :restore/collision error message claims that is the case#2022-12-2717:02Setzer22@lanejo01 yes. It seems what I'm trying to do is removing that existing database and overwrite it with this new one keeping the same name datomic. But I'm not sure how to do that#2022-12-2717:05Setzer22I'm having trouble understanding what datomic even means when it tells me there's an existing database called "datomic". I don't remember creating anything like a datomic "db" or giving it a name. I am using postgres as my storage, and I remember creating a schema for datomic inside postgres, but I don't think this what datomic is referring to#2022-12-2717:56ghadiyou may have restored twice unwittingly #2022-12-2718:56Setzer22I think I'm not explaining myself clearly 😅 Let me explain a bit better. A is a production system, B is a staging system. What I want to do is to move the data from A (a fully-functioning datomic database holding our user data) into B (another fully-functioning datomic database holding whatever dummy data we use for testing). Both systems apparently have a database named datomic, and I can backup and restore from A to A, or from B to B as many times as I want, but I cannot restore a backup made for A into B.
The purpose of doing this is to be able to test a data migration in a controlled environment with our real user data, but in a controlled environment where doing this won't cause any unnecessary downtime.#2022-12-2718:56Setzer22Other databases have similar features. Postgres has pg_dump and pg_restore. MongoDB has mongodump and mongorestore. Both allow you to transfer data across databases. Is there a way to achieve the same thing with datomic?#2022-12-2720:50favilaA datomic db made by create-database has both its name and a uuid. The uuid is to distinguish dbs with the same name created by distinct create-database calls. Your error message is because you are attempting to restore over another db with the same name created via a different call#2022-12-2720:51favilaYou need to first remove the staging db before restoring #2022-12-2720:53favilaThe fastest way is to kill staging txor, truncate the underlying Postgres table (assuming there are no other dbs you need in it), restore from your prod backup into staging, then bring up the txor again#2022-12-2808:40Setzer22Hi @U09R86PA4, thanks a lot! By "truncating the underlying postgres table" you're referring to the datomic_kvs table, right?#2022-12-2814:57favilaYes#2022-12-2923:57Joe R. SmithThis is just bizarre, here is a query that returns the number of pets in the database by species (`:pet/species` is a ref to a :db/ident):
(d/q '[:find (count ?species) ?species
:with ?p
:in $ ?start ?end
:where
[?p :pet/id _ ?tx]
[?tx :db/txInstant ?date]
[(< ?start ?date)]
[(< ?date ?end)]
[?p :pet/species ?species]]
db start-date end-date)
The result:
[[436 7705377487454387]
[233 54830445853933752]
[20759 55613298132910258]
[304 61348350785732562]
[21339 69115300921999537]]
Cool. I want to pull the second lvar in the find clause, so I replace the find clause with:
(count ?species) (pull ?species [*])
Now I get:
[[#:db{:id 436}]
[#:db{:id 233,
:ident :pet.medical-event.record.wellness-vaccines.line-item.fiv-felv-test/fiv-positive,
:valueType #:db{:id 24, :ident :db.type/boolean},
:cardinality #:db{:id 35, :ident :db.cardinality/one},
:doc ""}]
[#:db{:id 20759}]
[#:db{:id 304,
:ident :schedule/recurrence-pattern,
:valueType #:db{:id 23, :ident :db.type/string},
:cardinality #:db{:id 35, :ident :db.cardinality/one},
:doc "Recurrence pattern"}]
[#:db{:id 21339}]]
It is pulling the count as the entity id and only returning 1 thing in each tuple.. (note the 436, 233, 20759, 304, and 21339 from the first query) 🤯#2022-12-3000:13favilaIs your :with clause still there?#2022-12-3000:13Joe R. Smithyes#2022-12-3000:14favilaI wouldn’t expect mentioning the same var more than once in a find to work#2022-12-3000:14Joe R. Smiththis is a common pattern I use to get the count and name of a thing#2022-12-3000:15Joe R. Smithfwiw, if I do the :db/ident lookup in the query, it works fine:
(d/q '[:find (count ?ispecies) ?ispecies
:with ?p
:in $ ?start ?end
:where
[?p :pet/id _ ?tx]
[?tx :db/txInstant ?date]
[(< ?start ?date)]
[(< ?date ?end)]
[?p :pet/species ?species]
[?species :db/ident ?ispecies]]
db start-date end-date)
#2022-12-3000:15favilaBut it mixes aggregation with non-aggregation in the same var #2022-12-3000:16favilaSemantically I’m not sure how they could correlate #2022-12-3000:16Joe R. Smith(d/q '[:find (count ?ispecies) ?ispecies
:with ?p
:in $ ?start ?end
:where
[?p :pet/id _ ?tx]
[?tx :db/txInstant ?date]
[(< ?start ?date)]
[(< ?date ?end)]
[?p :pet/species ?species]
[?species :db/ident ?ispecies]]
db start-date end-date)
=>
[[20759 :pet.species/cat]
[21339 :pet.species/dog]
[304 :pet.species/guinea-pig]
[436 :pet.species/rabbit]
[233 :pet.species/small-animal]]#2022-12-3000:16favilaThis seems like UB and you are lucky it worked for the count case#2022-12-3000:18Joe R. Smithcounting the pets instead of the species by pet:
(d/q '[:find (count ?p) ?ispecies
:in $ ?start ?end
:where
[?p :pet/id _ ?tx]
[?tx :db/txInstant ?date]
[(< ?start ?date)]
[(< ?date ?end)]
[?p :pet/species ?species]
[?species :db/ident ?ispecies]]
db start-date end-date)
=>
[[20759 :pet.species/cat]
[21339 :pet.species/dog]
[304 :pet.species/guinea-pig]
[436 :pet.species/rabbit]
[233 :pet.species/small-animal]]
yields same results#2022-12-3000:18Joe R. Smithadmittedly, that is more semantically correct#2022-12-3000:18favilaWhy not count ?p instead and drop the with?#2022-12-3000:18Joe R. Smith^ 😉#2022-12-3000:19Joe R. Smith... but I'm still confused by the original behavior. I'm reluctant to accept UB / garbage in/out as a reason.#2022-12-3000:20favilaI think it’s probably some implementation detail of pull leaking out#2022-12-3000:20Joe R. Smithperhaps#2022-12-3000:20favilaLike assuming vars are used only once, doing pull then replacement or something#2022-12-3000:21Joe R. Smithyeah, that seems possible#2022-12-3000:22favilaIf you pull 2x from the same var I think it throws an error #2022-12-3000:23Joe R. Smithit doesn't, but it ignores the second (which might be a clue)#2022-12-3000:24Joe R. Smithanyway, I'll chalk this up as weird but avoidable for now, maybe pick some brains over it later. 😄#2022-12-3000:25Joe R. Smithe.g., @U0CJ19XAM#2022-12-3000:25Joe LaneI've been lurking 🙂#2022-12-3000:27Joe R. SmithI suspected. As you'll see above, there is a smarter/more semantically correct way for me to do this query, but I'm curious wtf is going on anyway#2022-12-3000:32Joe LaneThe majority of my head space right now is dedicated to performance improvements, @U424XHTGT is the guy you'd want to talk to at a party about this. Regardless, if you could send a minimal repro over as a gist, I'd appreciate it.#2023-01-0418:41KeithHey @U087E7EJW, I'm still getting caught back up after the holiday break, but I wanted to drop in and let you know that I'm looking into this and will let you know what I find.
Fwiw, I've been able to reproduce what you described, and I'm glad to see you found a workaround. Good to hear from you 🙂#2023-01-0419:26Joe R. Smiththanks Keith. 🙂 Fortunately the workaround is the idiomatic way of doing it and this is just a curiosity.
I hope everything is well at the mothership. 😄#2023-01-0115:41cl_jFor a large d/q on datomic cloud query which returns a large amount of data and takes almost one minute to finish, does it make sense to split the large query into many small queries and run these queries in parallel and combine the results? I think this could help reducing the query time but I am not sure whether this can reduce the memory requirement?#2023-01-0118:21favilaSorry I thought I was replying in a thread, see channel (i’m on a phone rn)#2023-01-0201:21cl_jThanks @U09R86PA4, very useful information. Another technique I'd like to try is to use custom query/aggregate functions to filter and reduce the result set returned by Datomic, I think for large data set, this might perform better than pulling all results and doing the computation and filtering in Clojure, since this would required less data transferred from Datomic to Clojure#2023-01-0207:14favilaYou can test whether this is worth it by doing some simple aggregate with zero cost and max result set size compression (eg count) and measuring the difference with and without it#2023-01-0207:16favilaAggregates still realize the entire result set so you really only save IO and de/serialization time #2023-01-0118:14favilaI have had very good results with this technique #2023-01-0118:16favilaWhere the first where clause has a very large result set#2023-01-0118:17favilaDivide it up into chunks, run the query with just enough parallelism to pipeline (like n=2, or even n=1) and merge the results #2023-01-0118:19favilaReducing intermediate result set size makes a huge difference, IME often runs faster than a single query on an instance with a much larger heap#2023-01-0513:42nottmeyI use this query to pull out a list of my apps entities (specified by attributes returned by (keys schema)). It’s quite nice, because I directly know the count and can split it into pages via offset/limit, via a single call.
(d/qseq {:query '[:find (pull ?e [*])
:in $ [?as ...]
:where [?e ?as]]
:args [db (keys schema)]})
But it returns the entities in increasing order (by id). How do I tell the query resolver to start with the highest number? (effectively reversing the result, without loading the whole dataset)#2023-01-0513:52favilaQuery results are in arbitrary order and are eager (realize the whole set)#2023-01-0513:53favilaOnly the pull is deferred#2023-01-0513:53favilaSo if you want them in a certain order, sort in Clojure then pull, and don’t use qseq+pull#2023-01-0514:16nottmeyah wow, thanks.
what's a better way to bulk pull than this?
(d/q '[:find (pull ?es [*])
:in $ [?es ...]]
db
recently-updated-entities)
it seems quite fast (10x faster than pmap+pull), but feels awkward to use q for it#2023-01-0514:17favilaOn cloud/client that’s the only option#2023-01-0514:18favilaon on-prem, there’s d/pull-many#2023-01-0514:18favilaThe speed you see is from avoiding round-trips and blocking; you could recover it by pipelining#2023-01-0514:20favilaAlthough I see you tried pmap already#2023-01-0514:20favilakinda surprised at a 10x difference#2023-01-0514:23nottmeyOk, I’m on cloud. I can stick to the q version, which funnily enough needs resorting afterwards again ^^
Yes, on my local machine q (with consumption) is 30ms and the lines below are 400ms, but maybe that’s not how to use pmap ^^
(->> recently-updated-entities
(pmap #(d/pull db '[*] %))
doall)#2023-01-0514:24favilapmap is for cpu not io workloads, but on an idle system it’s a quick and dirty way to pipeline even io tasks#2023-01-0514:25favilaso that looks right to me and I expected it to be faster#2023-01-0514:25favilayou can include the sort key with your followup query to make things easier#2023-01-0514:26nottmeyah yes, good idea#2023-01-0514:26favila(->> (d/q '[:find (pull ?es [*]) ?i
:in $ [[?i ?es]]]
db
(into [] (map-indexed vector) recently-updated-entities))
(sort-by peek)
(mapv first))#2023-01-0514:26favilafor e.g.#2023-01-0514:33pieterbreedI've just run into the "Loading database" issue with datomic cloud. Is there any way to prevent the ion instance from receiving requests (lambda, http) until the database has finished loading?#2023-01-1520:46cch1I’m almost certainly misunderstanding the question, because I thought ion deploys were supposed to ensure that DBs were https://docs.datomic.com/cloud/ions/ions-reference.html#deploy.#2023-01-0820:09Jakub Holý (HolyJak)Hi folks! Do you have any tips for how to use dev-local with a GitHub Action? Namely I want to run https://github.com/fulcrologic/fulcro-rad-demo/pull/42/files#diff-c9346137b015156901cb500b96f00582ba4ff714a3cd3fabd9a6cdc93496e4cfR43 but currently have to skip running it for https://github.com/fulcrologic/fulcro-rad-demo/blob/main/deps.edn#L42-L44 b/c com.datomic/dev-local is not in Maven central. Do I need to add the cognitect repo and set up ~/.m2/settings.xml with my credentials, presumably leveraging https://docs.github.com/en/rest/actions/secrets?apiVersion=2022-11-28 to store my credentials? Or is there a simpler way? Thank you! (Notice this is a publicly accessible, OSS repo.)#2023-01-0823:12vnczI wondered about that myself and that is the only possible way I found so far#2023-01-0914:08Daniel JompheFor a regular Github Action, it's very easy to setup Java and Maven's settings in one go:
- name: Install Java
uses: actions/
The above generates the following on your test runner instance:
cat ~/.m2/settings.xml
<settings xmlns=""
xmlns:xsi=""
xsi:schemaLocation=" ">
<servers>
<server>
<id>cognitect-dev-tools</id>
<username>${env.MVN_DEVTOOLS_EML}</username>
<password>${env.MVN_DEVTOOLS_PWD}</password>
</server>
</servers>
</settings>
So you need to prepare the envars for their usage when any process will want to read the maven settings file:
- name: Start backend
env:
MVN_DEVTOOLS_EML: ${{ secrets.MVN_DEVTOOLS_EML }}
MVN_DEVTOOLS_PWD: ${{ secrets.MVN_DEVTOOLS_PWD }}
run: |
mkdir -p log
DATOMIC_ENV_MAP="{:env :...}" clojure -M:dev... &> log/....log #2023-01-1520:48cch1You can also put the jar in source (admittedly, yucky) and use :local/root in your deps.edn#2023-01-1522:51Jakub Holý (HolyJak)can you? I am not sure that the license permits that#2023-01-1522:58cch1Sorry, I was assuming that your "sources" was private to you. If that is the case, I don't think the license prohibits you from bundling the jar with your SCM-controlled source as long as it is only "controllable" by you and used for your internal business processes. If the source is put into a public repo then that would be a problem.#2023-01-1609:29Jakub Holý (HolyJak)no, it is all open source
#2023-01-0916:06jdkealyis there a way to get a datomic connection from a DB record ?#2023-01-0916:15favilano#2023-01-1100:34Joe R. SmithIs this Datomic Cloud or On-Prem?#2023-01-1100:35Joe R. Smithif cloud:
(.-conn db)
#2023-01-1400:45uwoI just tried and found out that d/q accepts a reducible as a collection argument and I'm very happy 🙂#2023-01-1413:42indyIs there a simple way to parse datomic exceptions?
{:cognitect.anomalies/category :cognitect.anomalies/conflict,
:cognitect.anomalies/message "Unique conflict: :user/email, value:
Example: I wish I got the attribute :user/email and the value as data instead of having them embedded inside a string.#2023-01-1413:50indyFound this https://groups.google.com/g/datomic/c/kOBvvc228VM, and it seems like it hasn’t been prioritized in 6 years.#2023-01-1600:03Drew Verleecan a datomic :db.type/ref be a uuid? I feel like the answer is no, it's always an Int.#2023-01-1600:18ghadicorrect, refs are specific types, totally distinct from uuids#2023-01-1603:57Drew Verleethanks for the help 🙂#2023-01-1616:48Joe R. SmithMy Datomic codedeploys started timing out in the "BeforeInstall" step this morning. Nothing else has changed, redeploying the same code as yesterday consistently results in a timeout at that step.
Not sure how to troubleshoot. Running prod compute 973-9132#2023-01-1616:50Joe R. SmithThe Codedeploy logs contain nothing interesting, last two lines:
Script - scripts/deploy-prepare-for-shutdown
[stderr]Started deploy-prepare-for-shutdown at 2023-01-16 16:43:30
#2023-01-1617:09Joe R. Smithall that script does is echo "prepare-for-shutdown" into a datomic process port#2023-01-1712:40icemanmeltingHi guys, I was testing Datomic on prem with postgres backend, and I have a few million datoms inserted, although I would like to count them. I have come up with the following query:
[:find (count ?e)
:where
[?e :tweet/id]]
Is there a reason for this query to take so long that it either runs out of memory, or never returns a result?#2023-01-1712:48favilaThe reason is that d/q is eager and must hold the entire result in memory before aggregation. It is not useful for large results like this. Use d/datoms instead#2023-01-1712:48icemanmeltingI was using the console, to do that query, do you know how I would go about using d/datoms in datomic console?#2023-01-1712:49favilaI don’t think you can. Use a repl#2023-01-1712:49icemanmeltingok, thanks for your help#2023-01-1714:28jaretalso if you would like the total count of datoms in the history database you can use https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/db-stats#2023-01-1714:28icemanmeltingthanks for that 🙂#2023-01-1716:00icemanmeltingIt also didn’t help that my id attribute wasn’t set as unique…#2023-01-1716:42icemanmelting@U09R86PA4 question, I have used datoms like so
(d/datoms (d/db (d/connect client {:db-name "twitter"})) {:index :aevt :components [:tweet/id]})#2023-01-1716:42icemanmeltingBut how can I know how many elements there are in that structure? If I try to iterate and increment a counter, it always returns 1000, which I think might be a default chunk size#2023-01-1716:43favilaif you are using the sync api:#2023-01-1716:43favila(count (seq (d/datoms (d/db (d/connect client {:db-name "twitter"})) {:index :aevt :components [:tweet/id] :limit -1})))
#2023-01-1716:44favilaadd :limit -1 and just coerce to a seq and count it
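[Pulling the thread together — a minimal REPL sketch, assuming the Datomic client API and the "twitter" db and :tweet/id attribute from the conversation; `client` is assumed to be configured elsewhere:]
```clojure
(require '[datomic.client.api :as d])

;; Assumes a `client` configured elsewhere and the "twitter" db from the thread.
(def conn (d/connect client {:db-name "twitter"}))
(def db   (d/db conn))

;; d/datoms returns results in chunks (1000 by default, which is why the
;; counter above stopped at 1000). :limit -1 asks for everything; coercing
;; to a seq and counting walks the index lazily instead of materializing a
;; whole d/q result set in memory.
(count (seq (d/datoms db {:index      :aevt
                          :components [:tweet/id]
                          :limit      -1})))

;; Cheaper alternative mentioned in the thread (on-prem peer API only):
;; datomic.api/db-stats returns a map that includes a total :datoms count.
```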